00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 984 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3646 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.143 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.143 The recommended git tool is: git 00:00:00.144 using credential 00000000-0000-0000-0000-000000000002 00:00:00.145 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.197 Fetching changes from the remote Git repository 00:00:00.199 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.246 Using shallow fetch with depth 1 00:00:00.246 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.246 > git --version # timeout=10 00:00:00.287 > git --version # 'git version 2.39.2' 00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.316 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.316 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.982 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.995 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:10.006 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:10.006 > git config core.sparsecheckout # timeout=10 00:00:10.016 > git read-tree -mu HEAD # timeout=10 00:00:10.031 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:10.052 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:10.052 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:10.157 [Pipeline] Start of Pipeline 00:00:10.174 [Pipeline] library 00:00:10.176 Loading library shm_lib@master 00:00:10.176 Library shm_lib@master is cached. Copying from home. 00:00:10.190 [Pipeline] node 00:00:10.201 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:10.203 [Pipeline] { 00:00:10.214 [Pipeline] catchError 00:00:10.216 [Pipeline] { 00:00:10.231 [Pipeline] wrap 00:00:10.240 [Pipeline] { 00:00:10.245 [Pipeline] stage 00:00:10.247 [Pipeline] { (Prologue) 00:00:10.261 [Pipeline] echo 00:00:10.262 Node: VM-host-SM9 00:00:10.267 [Pipeline] cleanWs 00:00:10.275 [WS-CLEANUP] Deleting project workspace... 00:00:10.275 [WS-CLEANUP] Deferred wipeout is used... 
00:00:10.281 [WS-CLEANUP] done 00:00:10.454 [Pipeline] setCustomBuildProperty 00:00:10.538 [Pipeline] httpRequest 00:00:11.131 [Pipeline] echo 00:00:11.133 Sorcerer 10.211.164.20 is alive 00:00:11.142 [Pipeline] retry 00:00:11.144 [Pipeline] { 00:00:11.159 [Pipeline] httpRequest 00:00:11.163 HttpMethod: GET 00:00:11.163 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.164 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.175 Response Code: HTTP/1.1 200 OK 00:00:11.176 Success: Status code 200 is in the accepted range: 200,404 00:00:11.177 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.236 [Pipeline] } 00:00:14.253 [Pipeline] // retry 00:00:14.260 [Pipeline] sh 00:00:14.545 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.561 [Pipeline] httpRequest 00:00:14.908 [Pipeline] echo 00:00:14.910 Sorcerer 10.211.164.20 is alive 00:00:14.920 [Pipeline] retry 00:00:14.922 [Pipeline] { 00:00:14.936 [Pipeline] httpRequest 00:00:14.940 HttpMethod: GET 00:00:14.941 URL: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:14.942 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:14.964 Response Code: HTTP/1.1 200 OK 00:00:14.964 Success: Status code 200 is in the accepted range: 200,404 00:00:14.965 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:01:23.515 [Pipeline] } 00:01:23.533 [Pipeline] // retry 00:01:23.544 [Pipeline] sh 00:01:23.826 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:01:27.130 [Pipeline] sh 00:01:27.428 + git -C spdk log --oneline -n5 00:01:27.428 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:27.428 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 
00:01:27.428 029355612 bdev_ut: add manual examine bdev unit test case 00:01:27.428 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:01:27.428 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public 00:01:27.450 [Pipeline] withCredentials 00:01:27.462 > git --version # timeout=10 00:01:27.474 > git --version # 'git version 2.39.2' 00:01:27.491 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:27.493 [Pipeline] { 00:01:27.506 [Pipeline] retry 00:01:27.509 [Pipeline] { 00:01:27.526 [Pipeline] sh 00:01:27.807 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:27.820 [Pipeline] } 00:01:27.842 [Pipeline] // retry 00:01:27.847 [Pipeline] } 00:01:27.867 [Pipeline] // withCredentials 00:01:27.879 [Pipeline] httpRequest 00:01:28.255 [Pipeline] echo 00:01:28.257 Sorcerer 10.211.164.20 is alive 00:01:28.267 [Pipeline] retry 00:01:28.269 [Pipeline] { 00:01:28.284 [Pipeline] httpRequest 00:01:28.288 HttpMethod: GET 00:01:28.289 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.290 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:28.291 Response Code: HTTP/1.1 200 OK 00:01:28.291 Success: Status code 200 is in the accepted range: 200,404 00:01:28.292 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.511 [Pipeline] } 00:01:38.528 [Pipeline] // retry 00:01:38.535 [Pipeline] sh 00:01:38.816 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:40.208 [Pipeline] sh 00:01:40.489 + git -C dpdk log --oneline -n5 00:01:40.489 eeb0605f11 version: 23.11.0 00:01:40.489 238778122a doc: update release notes for 23.11 00:01:40.489 46aa6b3cfc doc: fix description of RSS features 00:01:40.489 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:40.489 7e421ae345 devtools: support skipping forbid rule check 00:01:40.506 [Pipeline] writeFile 00:01:40.521 [Pipeline] sh 00:01:40.804 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:40.817 [Pipeline] sh 00:01:41.102 + cat autorun-spdk.conf 00:01:41.102 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.102 SPDK_TEST_NVMF=1 00:01:41.102 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.102 SPDK_TEST_URING=1 00:01:41.102 SPDK_TEST_VFIOUSER=1 00:01:41.102 SPDK_TEST_USDT=1 00:01:41.102 SPDK_RUN_UBSAN=1 00:01:41.102 NET_TYPE=virt 00:01:41.102 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.102 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.102 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.109 RUN_NIGHTLY=1 00:01:41.111 [Pipeline] } 00:01:41.125 [Pipeline] // stage 00:01:41.143 [Pipeline] stage 00:01:41.145 [Pipeline] { (Run VM) 00:01:41.159 [Pipeline] sh 00:01:41.444 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:41.444 + echo 'Start stage prepare_nvme.sh' 00:01:41.444 Start stage prepare_nvme.sh 00:01:41.444 + [[ -n 5 ]] 00:01:41.444 + disk_prefix=ex5 00:01:41.444 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:41.444 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:41.444 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:41.444 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.444 ++ SPDK_TEST_NVMF=1 00:01:41.444 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.444 ++ SPDK_TEST_URING=1 00:01:41.444 ++ SPDK_TEST_VFIOUSER=1 00:01:41.444 ++ SPDK_TEST_USDT=1 00:01:41.444 ++ SPDK_RUN_UBSAN=1 
00:01:41.444 ++ NET_TYPE=virt 00:01:41.444 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:41.444 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:41.444 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:41.444 ++ RUN_NIGHTLY=1 00:01:41.444 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:41.444 + nvme_files=() 00:01:41.444 + declare -A nvme_files 00:01:41.444 + backend_dir=/var/lib/libvirt/images/backends 00:01:41.444 + nvme_files['nvme.img']=5G 00:01:41.444 + nvme_files['nvme-cmb.img']=5G 00:01:41.444 + nvme_files['nvme-multi0.img']=4G 00:01:41.444 + nvme_files['nvme-multi1.img']=4G 00:01:41.444 + nvme_files['nvme-multi2.img']=4G 00:01:41.444 + nvme_files['nvme-openstack.img']=8G 00:01:41.444 + nvme_files['nvme-zns.img']=5G 00:01:41.444 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:41.444 + (( SPDK_TEST_FTL == 1 )) 00:01:41.444 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:41.444 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:41.444 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:41.444 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:41.444 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:41.444 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:41.444 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.444 + for nvme in "${!nvme_files[@]}" 00:01:41.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:41.704 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:41.704 + for nvme in "${!nvme_files[@]}" 00:01:41.704 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:41.704 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:41.704 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:41.704 + echo 'End stage prepare_nvme.sh' 00:01:41.704 End stage prepare_nvme.sh 00:01:41.724 [Pipeline] sh 00:01:42.006 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:42.006 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b 
/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:42.265 00:01:42.265 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:42.265 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:42.265 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:42.265 HELP=0 00:01:42.265 DRY_RUN=0 00:01:42.265 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:42.265 NVME_DISKS_TYPE=nvme,nvme, 00:01:42.265 NVME_AUTO_CREATE=0 00:01:42.265 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:42.265 NVME_CMB=,, 00:01:42.265 NVME_PMR=,, 00:01:42.265 NVME_ZNS=,, 00:01:42.265 NVME_MS=,, 00:01:42.265 NVME_FDP=,, 00:01:42.265 SPDK_VAGRANT_DISTRO=fedora39 00:01:42.265 SPDK_VAGRANT_VMCPU=10 00:01:42.265 SPDK_VAGRANT_VMRAM=12288 00:01:42.265 SPDK_VAGRANT_PROVIDER=libvirt 00:01:42.265 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:42.265 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:42.265 SPDK_OPENSTACK_NETWORK=0 00:01:42.265 VAGRANT_PACKAGE_BOX=0 00:01:42.265 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:42.265 FORCE_DISTRO=true 00:01:42.265 VAGRANT_BOX_VERSION= 00:01:42.265 EXTRA_VAGRANTFILES= 00:01:42.265 NIC_MODEL=e1000 00:01:42.265 00:01:42.265 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:42.265 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:45.554 Bringing machine 'default' up with 'libvirt' provider... 00:01:45.554 ==> default: Creating image (snapshot of base box volume). 00:01:45.813 ==> default: Creating domain with the following settings... 
00:01:45.813 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732031812_e49b50b280fd345ca7ad 00:01:45.813 ==> default: -- Domain type: kvm 00:01:45.813 ==> default: -- Cpus: 10 00:01:45.813 ==> default: -- Feature: acpi 00:01:45.813 ==> default: -- Feature: apic 00:01:45.813 ==> default: -- Feature: pae 00:01:45.813 ==> default: -- Memory: 12288M 00:01:45.813 ==> default: -- Memory Backing: hugepages: 00:01:45.813 ==> default: -- Management MAC: 00:01:45.813 ==> default: -- Loader: 00:01:45.813 ==> default: -- Nvram: 00:01:45.813 ==> default: -- Base box: spdk/fedora39 00:01:45.813 ==> default: -- Storage pool: default 00:01:45.813 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732031812_e49b50b280fd345ca7ad.img (20G) 00:01:45.813 ==> default: -- Volume Cache: default 00:01:45.813 ==> default: -- Kernel: 00:01:45.813 ==> default: -- Initrd: 00:01:45.813 ==> default: -- Graphics Type: vnc 00:01:45.813 ==> default: -- Graphics Port: -1 00:01:45.813 ==> default: -- Graphics IP: 127.0.0.1 00:01:45.813 ==> default: -- Graphics Password: Not defined 00:01:45.813 ==> default: -- Video Type: cirrus 00:01:45.814 ==> default: -- Video VRAM: 9216 00:01:45.814 ==> default: -- Sound Type: 00:01:45.814 ==> default: -- Keymap: en-us 00:01:45.814 ==> default: -- TPM Path: 00:01:45.814 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:45.814 ==> default: -- Command line args: 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:45.814 ==> default: -> value=-drive, 00:01:45.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:45.814 ==> default: -> value=-drive, 00:01:45.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.814 ==> default: -> value=-drive, 00:01:45.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.814 ==> default: -> value=-drive, 00:01:45.814 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:45.814 ==> default: -> value=-device, 00:01:45.814 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:45.814 ==> default: Creating shared folders metadata... 00:01:45.814 ==> default: Starting domain. 00:01:47.193 ==> default: Waiting for domain to get an IP address... 00:02:02.105 ==> default: Waiting for SSH to become available... 00:02:03.499 ==> default: Configuring and enabling network interfaces... 
00:02:07.692 default: SSH address: 192.168.121.23:22 00:02:07.692 default: SSH username: vagrant 00:02:07.692 default: SSH auth method: private key 00:02:10.228 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:18.346 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:23.617 ==> default: Mounting SSHFS shared folder... 00:02:24.995 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:24.996 ==> default: Checking Mount.. 00:02:25.933 ==> default: Folder Successfully Mounted! 00:02:25.933 ==> default: Running provisioner: file... 00:02:26.870 default: ~/.gitconfig => .gitconfig 00:02:27.129 00:02:27.129 SUCCESS! 00:02:27.129 00:02:27.129 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:27.129 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:27.129 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:27.129 00:02:27.138 [Pipeline] } 00:02:27.152 [Pipeline] // stage 00:02:27.161 [Pipeline] dir 00:02:27.162 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:27.164 [Pipeline] { 00:02:27.176 [Pipeline] catchError 00:02:27.178 [Pipeline] { 00:02:27.190 [Pipeline] sh 00:02:27.470 + vagrant ssh-config --host vagrant 00:02:27.470 + sed -ne /^Host/,$p 00:02:27.470 + tee ssh_conf 00:02:30.757 Host vagrant 00:02:30.757 HostName 192.168.121.23 00:02:30.757 User vagrant 00:02:30.757 Port 22 00:02:30.757 UserKnownHostsFile /dev/null 00:02:30.757 StrictHostKeyChecking no 00:02:30.757 PasswordAuthentication no 00:02:30.757 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:30.757 IdentitiesOnly yes 00:02:30.757 LogLevel FATAL 00:02:30.757 ForwardAgent yes 00:02:30.757 ForwardX11 yes 00:02:30.757 00:02:30.769 [Pipeline] withEnv 00:02:30.771 [Pipeline] { 00:02:30.783 [Pipeline] sh 00:02:31.064 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:31.064 source /etc/os-release 00:02:31.064 [[ -e /image.version ]] && img=$(< /image.version) 00:02:31.064 # Minimal, systemd-like check. 00:02:31.064 if [[ -e /.dockerenv ]]; then 00:02:31.064 # Clear garbage from the node's name: 00:02:31.064 # agt-er_autotest_547-896 -> autotest_547-896 00:02:31.064 # $HOSTNAME is the actual container id 00:02:31.064 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:31.064 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:31.064 # We can assume this is a mount from a host where container is running, 00:02:31.064 # so fetch its hostname to easily identify the target swarm worker. 
00:02:31.064 container="$(< /etc/hostname) ($agent)" 00:02:31.064 else 00:02:31.064 # Fallback 00:02:31.064 container=$agent 00:02:31.064 fi 00:02:31.064 fi 00:02:31.064 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:31.064 00:02:31.332 [Pipeline] } 00:02:31.349 [Pipeline] // withEnv 00:02:31.356 [Pipeline] setCustomBuildProperty 00:02:31.371 [Pipeline] stage 00:02:31.372 [Pipeline] { (Tests) 00:02:31.387 [Pipeline] sh 00:02:31.665 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:31.937 [Pipeline] sh 00:02:32.215 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:32.489 [Pipeline] timeout 00:02:32.489 Timeout set to expire in 1 hr 0 min 00:02:32.491 [Pipeline] { 00:02:32.504 [Pipeline] sh 00:02:32.779 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:33.347 HEAD is now at dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:02:33.359 [Pipeline] sh 00:02:33.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:33.911 [Pipeline] sh 00:02:34.204 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:34.259 [Pipeline] sh 00:02:34.563 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:34.822 ++ readlink -f spdk_repo 00:02:34.822 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:34.822 + [[ -n /home/vagrant/spdk_repo ]] 00:02:34.822 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:34.822 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:34.822 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:34.822 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:34.822 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:34.822 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:34.822 + cd /home/vagrant/spdk_repo 00:02:34.822 + source /etc/os-release 00:02:34.822 ++ NAME='Fedora Linux' 00:02:34.822 ++ VERSION='39 (Cloud Edition)' 00:02:34.822 ++ ID=fedora 00:02:34.822 ++ VERSION_ID=39 00:02:34.822 ++ VERSION_CODENAME= 00:02:34.822 ++ PLATFORM_ID=platform:f39 00:02:34.822 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:34.822 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:34.822 ++ LOGO=fedora-logo-icon 00:02:34.822 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:34.822 ++ HOME_URL=https://fedoraproject.org/ 00:02:34.822 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:34.822 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:34.822 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:34.822 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:34.822 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:34.822 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:34.822 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:34.822 ++ SUPPORT_END=2024-11-12 00:02:34.822 ++ VARIANT='Cloud Edition' 00:02:34.822 ++ VARIANT_ID=cloud 00:02:34.822 + uname -a 00:02:34.822 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:34.822 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:35.389 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:35.389 Hugepages 00:02:35.389 node hugesize free / total 00:02:35.389 node0 1048576kB 0 / 0 00:02:35.389 node0 2048kB 0 / 0 00:02:35.389 00:02:35.389 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:35.389 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:35.389 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:35.389 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:35.389 + rm -f /tmp/spdk-ld-path 00:02:35.389 + source autorun-spdk.conf 00:02:35.389 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.389 ++ SPDK_TEST_NVMF=1 00:02:35.389 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.389 ++ SPDK_TEST_URING=1 00:02:35.389 ++ SPDK_TEST_VFIOUSER=1 00:02:35.389 ++ SPDK_TEST_USDT=1 00:02:35.389 ++ SPDK_RUN_UBSAN=1 00:02:35.389 ++ NET_TYPE=virt 00:02:35.389 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:35.389 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.389 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.389 ++ RUN_NIGHTLY=1 00:02:35.389 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:35.389 + [[ -n '' ]] 00:02:35.389 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:35.389 + for M in /var/spdk/build-*-manifest.txt 00:02:35.389 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:35.389 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.389 + for M in /var/spdk/build-*-manifest.txt 00:02:35.389 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:35.389 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.389 + for M in /var/spdk/build-*-manifest.txt 00:02:35.390 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:35.390 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:35.390 ++ uname 00:02:35.390 + [[ Linux == \L\i\n\u\x ]] 00:02:35.390 + sudo dmesg -T 00:02:35.390 + sudo dmesg --clear 00:02:35.390 + dmesg_pid=5995 
00:02:35.390 + [[ Fedora Linux == FreeBSD ]] 00:02:35.390 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.390 + sudo dmesg -Tw 00:02:35.390 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:35.390 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:35.390 + [[ -x /usr/src/fio-static/fio ]] 00:02:35.390 + export FIO_BIN=/usr/src/fio-static/fio 00:02:35.390 + FIO_BIN=/usr/src/fio-static/fio 00:02:35.390 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:35.390 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:35.390 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:35.390 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.390 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:35.390 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:35.390 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.390 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:35.390 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.649 15:57:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:35.649 15:57:42 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@8 -- $ NET_TYPE=virt 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@11 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:35.649 15:57:42 -- spdk_repo/autorun-spdk.conf@12 -- $ RUN_NIGHTLY=1 00:02:35.649 15:57:42 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:35.649 15:57:42 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:35.649 15:57:42 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:35.649 15:57:42 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:35.649 15:57:42 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:35.649 15:57:42 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:35.649 15:57:42 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:35.649 15:57:42 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:35.649 15:57:42 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.649 15:57:42 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.649 15:57:42 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.649 15:57:42 -- paths/export.sh@5 -- $ export PATH 00:02:35.649 15:57:42 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:35.649 15:57:42 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:35.649 15:57:42 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:35.649 15:57:42 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732031862.XXXXXX 00:02:35.649 15:57:42 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732031862.mSie5E 00:02:35.649 15:57:42 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:35.649 15:57:42 -- common/autobuild_common.sh@492 -- $ '[' -n v23.11 ']' 00:02:35.649 15:57:42 -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:35.649 15:57:42 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:35.649 15:57:42 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:35.649 15:57:42 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:35.649 15:57:42 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:35.649 15:57:42 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:35.649 15:57:42 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.649 15:57:42 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:35.649 15:57:42 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:35.649 15:57:42 -- pm/common@17 -- $ local monitor 00:02:35.649 15:57:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.649 15:57:42 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:35.649 15:57:42 -- pm/common@25 -- $ sleep 1 
00:02:35.649 15:57:42 -- pm/common@21 -- $ date +%s 00:02:35.649 15:57:42 -- pm/common@21 -- $ date +%s 00:02:35.649 15:57:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732031862 00:02:35.649 15:57:42 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732031862 00:02:35.649 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732031862_collect-cpu-load.pm.log 00:02:35.649 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732031862_collect-vmstat.pm.log 00:02:36.585 15:57:43 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:36.585 15:57:43 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:36.585 15:57:43 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:36.585 15:57:43 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:36.585 15:57:43 -- spdk/autobuild.sh@16 -- $ date -u 00:02:36.585 Tue Nov 19 03:57:43 PM UTC 2024 00:02:36.585 15:57:43 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:36.585 v25.01-pre-197-gdcc2ca8f3 00:02:36.585 15:57:43 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:36.585 15:57:43 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:36.585 15:57:43 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:36.585 15:57:43 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:36.585 15:57:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.585 15:57:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.585 ************************************ 00:02:36.585 START TEST ubsan 00:02:36.585 ************************************ 00:02:36.585 using ubsan 00:02:36.585 15:57:43 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:36.585 00:02:36.585 real 0m0.000s 00:02:36.585 user 0m0.000s 00:02:36.585 sys 0m0.000s 00:02:36.585 15:57:43 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.585 15:57:43 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:36.585 ************************************ 00:02:36.585 END TEST ubsan 00:02:36.585 ************************************ 00:02:36.845 15:57:43 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:36.845 15:57:43 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:36.845 15:57:43 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:36.845 15:57:43 -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:36.845 15:57:43 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:36.845 15:57:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:36.845 ************************************ 00:02:36.845 START TEST build_native_dpdk 00:02:36.845 ************************************ 00:02:36.845 15:57:43 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:36.845 15:57:43 
build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:36.845 eeb0605f11 version: 23.11.0 00:02:36.845 238778122a doc: update release notes for 23.11 00:02:36.845 46aa6b3cfc doc: fix description of RSS features 00:02:36.845 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:36.845 7e421ae345 devtools: support skipping forbid rule check 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:36.845 15:57:43 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:36.846 15:57:43 
build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:36.846 patching file config/rte_config.h 00:02:36.846 Hunk #1 succeeded at 60 (offset 1 line). 
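The lt 23.11.0 21.11.0 trace above amounts to a field-by-field comparison of dotted version strings (split on IFS=.-:), used to decide whether the checked-out DPDK needs the rte_config.h compatibility patch. A minimal sketch of that pattern, assuming a three-field x.y.z layout and using an illustrative name (version_lt) rather than the actual cmp_versions helper from scripts/common.sh:

    # Sketch only: returns 0 (true) when $1 is an older version than $2.
    version_lt() {
        local IFS=.-:                  # split fields the same way the traced helper does
        local -a v1=($1) v2=($2)
        local i x y
        for ((i = 0; i < 3; i++)); do
            x=${v1[i]:-0}; y=${v2[i]:-0}
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                       # equal versions are not "less than"
    }

    # Example mirroring the trace: 23.11.0 is not older than 21.11.0, so the patch step runs.
    version_lt 23.11.0 21.11.0 || echo "23.11.0 >= 21.11.0, apply rte_config.h patch"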
00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:36.846 patching file lib/pcapng/rte_pcapng.c 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 23.11.0 24.07.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@341 -- 
$ ver2_l=3 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:36.846 15:57:43 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:36.846 15:57:43 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:42.119 The Meson build system 00:02:42.119 Version: 1.5.0 00:02:42.119 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:42.119 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:42.119 Build type: native build 00:02:42.119 Program cat found: YES (/usr/bin/cat) 00:02:42.119 Project name: DPDK 00:02:42.119 Project version: 23.11.0 00:02:42.119 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.119 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:42.119 Host machine cpu family: x86_64 00:02:42.119 Host machine cpu: x86_64 00:02:42.119 Message: ## Building in Developer Mode ## 00:02:42.119 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.119 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:42.119 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.119 Program python3 found: YES (/usr/bin/python3) 00:02:42.119 Program cat found: YES (/usr/bin/cat) 00:02:42.119 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:42.119 Compiler for C supports arguments -march=native: YES 00:02:42.119 Checking for size of "void *" : 8 00:02:42.119 Checking for size of "void *" : 8 (cached) 00:02:42.119 Library m found: YES 00:02:42.119 Library numa found: YES 00:02:42.119 Has header "numaif.h" : YES 00:02:42.119 Library fdt found: NO 00:02:42.119 Library execinfo found: NO 00:02:42.119 Has header "execinfo.h" : YES 00:02:42.119 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.119 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.119 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.119 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.119 Run-time dependency openssl found: YES 3.1.1 00:02:42.119 Run-time dependency libpcap found: YES 1.10.4 00:02:42.119 Has header "pcap.h" with dependency libpcap: YES 00:02:42.119 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.119 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.119 Compiler for C supports arguments -Wformat: YES 00:02:42.119 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.119 Compiler for C supports arguments -Wformat-security: NO 00:02:42.119 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.119 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.119 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.119 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.119 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.119 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.119 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.119 Compiler for C supports arguments -Wundef: YES 00:02:42.119 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.119 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.119 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.119 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.119 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.119 Program objdump found: YES (/usr/bin/objdump) 00:02:42.119 Compiler for C supports arguments -mavx512f: YES 00:02:42.119 Checking if "AVX512 checking" compiles: YES 00:02:42.119 Fetching value of define "__SSE4_2__" : 1 00:02:42.119 Fetching value of define "__AES__" : 1 00:02:42.119 Fetching value of define "__AVX__" : 1 00:02:42.119 Fetching value of define "__AVX2__" : 1 00:02:42.119 Fetching value of define "__AVX512BW__" : (undefined) 00:02:42.119 Fetching value of define "__AVX512CD__" : (undefined) 00:02:42.119 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:42.119 Fetching value of define "__AVX512F__" : (undefined) 00:02:42.119 Fetching value of define "__AVX512VL__" : (undefined) 00:02:42.119 Fetching value of define "__PCLMUL__" : 1 00:02:42.119 Fetching value of define "__RDRND__" : 1 00:02:42.119 Fetching value of define "__RDSEED__" : 1 00:02:42.119 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.119 Fetching value of define "__znver1__" : (undefined) 00:02:42.119 Fetching value of define "__znver2__" : (undefined) 00:02:42.119 Fetching value of define "__znver3__" : (undefined) 00:02:42.119 Fetching value of define "__znver4__" : (undefined) 00:02:42.119 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.119 Message: lib/log: Defining dependency "log" 00:02:42.119 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.119 
Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.119 Checking for function "getentropy" : NO 00:02:42.119 Message: lib/eal: Defining dependency "eal" 00:02:42.119 Message: lib/ring: Defining dependency "ring" 00:02:42.119 Message: lib/rcu: Defining dependency "rcu" 00:02:42.119 Message: lib/mempool: Defining dependency "mempool" 00:02:42.119 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.119 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:42.119 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.119 Compiler for C supports arguments -mpclmul: YES 00:02:42.119 Compiler for C supports arguments -maes: YES 00:02:42.119 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.119 Compiler for C supports arguments -mavx512bw: YES 00:02:42.119 Compiler for C supports arguments -mavx512dq: YES 00:02:42.119 Compiler for C supports arguments -mavx512vl: YES 00:02:42.119 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:42.119 Compiler for C supports arguments -mavx2: YES 00:02:42.119 Compiler for C supports arguments -mavx: YES 00:02:42.119 Message: lib/net: Defining dependency "net" 00:02:42.119 Message: lib/meter: Defining dependency "meter" 00:02:42.119 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.119 Message: lib/pci: Defining dependency "pci" 00:02:42.119 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.119 Message: lib/metrics: Defining dependency "metrics" 00:02:42.119 Message: lib/hash: Defining dependency "hash" 00:02:42.119 Message: lib/timer: Defining dependency "timer" 00:02:42.119 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.119 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:42.119 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:42.119 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:42.119 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:42.119 Message: lib/acl: Defining dependency "acl" 00:02:42.119 Message: lib/bbdev: Defining dependency "bbdev" 00:02:42.119 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:42.119 Run-time dependency libelf found: YES 0.191 00:02:42.119 Message: lib/bpf: Defining dependency "bpf" 00:02:42.119 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:42.119 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.119 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.119 Message: lib/distributor: Defining dependency "distributor" 00:02:42.120 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.120 Message: lib/efd: Defining dependency "efd" 00:02:42.120 Message: lib/eventdev: Defining dependency "eventdev" 00:02:42.120 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:42.120 Message: lib/gpudev: Defining dependency "gpudev" 00:02:42.120 Message: lib/gro: Defining dependency "gro" 00:02:42.120 Message: lib/gso: Defining dependency "gso" 00:02:42.120 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:42.120 Message: lib/jobstats: Defining dependency "jobstats" 00:02:42.120 Message: lib/latencystats: Defining dependency "latencystats" 00:02:42.120 Message: lib/lpm: Defining dependency "lpm" 00:02:42.120 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.120 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.120 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:42.120 Compiler for C supports arguments -mavx512f 
-mavx512dq -mavx512ifma: YES 00:02:42.120 Message: lib/member: Defining dependency "member" 00:02:42.120 Message: lib/pcapng: Defining dependency "pcapng" 00:02:42.120 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.120 Message: lib/power: Defining dependency "power" 00:02:42.120 Message: lib/rawdev: Defining dependency "rawdev" 00:02:42.120 Message: lib/regexdev: Defining dependency "regexdev" 00:02:42.120 Message: lib/mldev: Defining dependency "mldev" 00:02:42.120 Message: lib/rib: Defining dependency "rib" 00:02:42.120 Message: lib/reorder: Defining dependency "reorder" 00:02:42.120 Message: lib/sched: Defining dependency "sched" 00:02:42.120 Message: lib/security: Defining dependency "security" 00:02:42.120 Message: lib/stack: Defining dependency "stack" 00:02:42.120 Has header "linux/userfaultfd.h" : YES 00:02:42.120 Has header "linux/vduse.h" : YES 00:02:42.120 Message: lib/vhost: Defining dependency "vhost" 00:02:42.120 Message: lib/ipsec: Defining dependency "ipsec" 00:02:42.120 Message: lib/pdcp: Defining dependency "pdcp" 00:02:42.120 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:42.120 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:42.120 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:42.120 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:42.120 Message: lib/fib: Defining dependency "fib" 00:02:42.120 Message: lib/port: Defining dependency "port" 00:02:42.120 Message: lib/pdump: Defining dependency "pdump" 00:02:42.120 Message: lib/table: Defining dependency "table" 00:02:42.120 Message: lib/pipeline: Defining dependency "pipeline" 00:02:42.120 Message: lib/graph: Defining dependency "graph" 00:02:42.120 Message: lib/node: Defining dependency "node" 00:02:42.120 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.024 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.024 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.024 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.024 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:44.024 Compiler for C supports arguments -Wno-unused-value: YES 00:02:44.024 Compiler for C supports arguments -Wno-format: YES 00:02:44.024 Compiler for C supports arguments -Wno-format-security: YES 00:02:44.024 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:44.024 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:44.024 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:44.024 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:44.024 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:44.024 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.024 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:44.024 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:44.024 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:44.024 Has header "sys/epoll.h" : YES 00:02:44.024 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:44.024 Configuring doxy-api-html.conf using configuration 00:02:44.024 Configuring doxy-api-man.conf using configuration 00:02:44.024 Program mandb found: YES (/usr/bin/mandb) 00:02:44.024 Program sphinx-build found: NO 00:02:44.024 Configuring rte_build_config.h using configuration 00:02:44.024 Message: 00:02:44.024 ================= 00:02:44.024 Applications Enabled 00:02:44.024 ================= 
00:02:44.024 00:02:44.024 apps: 00:02:44.024 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:44.024 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:44.024 test-pmd, test-regex, test-sad, test-security-perf, 00:02:44.024 00:02:44.024 Message: 00:02:44.024 ================= 00:02:44.024 Libraries Enabled 00:02:44.024 ================= 00:02:44.024 00:02:44.024 libs: 00:02:44.024 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:44.024 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:44.024 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:44.024 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:44.024 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:44.024 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:44.024 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:44.024 00:02:44.024 00:02:44.024 Message: 00:02:44.024 =============== 00:02:44.024 Drivers Enabled 00:02:44.024 =============== 00:02:44.024 00:02:44.024 common: 00:02:44.024 00:02:44.024 bus: 00:02:44.024 pci, vdev, 00:02:44.024 mempool: 00:02:44.024 ring, 00:02:44.024 dma: 00:02:44.024 00:02:44.024 net: 00:02:44.024 i40e, 00:02:44.024 raw: 00:02:44.024 00:02:44.024 crypto: 00:02:44.024 00:02:44.024 compress: 00:02:44.024 00:02:44.024 regex: 00:02:44.024 00:02:44.024 ml: 00:02:44.024 00:02:44.024 vdpa: 00:02:44.024 00:02:44.024 event: 00:02:44.024 00:02:44.024 baseband: 00:02:44.024 00:02:44.024 gpu: 00:02:44.024 00:02:44.024 00:02:44.024 Message: 00:02:44.024 ================= 00:02:44.024 Content Skipped 00:02:44.024 ================= 00:02:44.024 00:02:44.024 apps: 00:02:44.024 00:02:44.024 libs: 00:02:44.024 00:02:44.024 drivers: 00:02:44.024 common/cpt: not in enabled drivers build config 00:02:44.024 common/dpaax: not in enabled drivers build config 00:02:44.024 common/iavf: not in enabled drivers build config 00:02:44.024 common/idpf: not in enabled drivers build config 00:02:44.024 common/mvep: not in enabled drivers build config 00:02:44.024 common/octeontx: not in enabled drivers build config 00:02:44.024 bus/auxiliary: not in enabled drivers build config 00:02:44.024 bus/cdx: not in enabled drivers build config 00:02:44.024 bus/dpaa: not in enabled drivers build config 00:02:44.024 bus/fslmc: not in enabled drivers build config 00:02:44.024 bus/ifpga: not in enabled drivers build config 00:02:44.024 bus/platform: not in enabled drivers build config 00:02:44.024 bus/vmbus: not in enabled drivers build config 00:02:44.024 common/cnxk: not in enabled drivers build config 00:02:44.024 common/mlx5: not in enabled drivers build config 00:02:44.024 common/nfp: not in enabled drivers build config 00:02:44.024 common/qat: not in enabled drivers build config 00:02:44.024 common/sfc_efx: not in enabled drivers build config 00:02:44.024 mempool/bucket: not in enabled drivers build config 00:02:44.024 mempool/cnxk: not in enabled drivers build config 00:02:44.024 mempool/dpaa: not in enabled drivers build config 00:02:44.024 mempool/dpaa2: not in enabled drivers build config 00:02:44.024 mempool/octeontx: not in enabled drivers build config 00:02:44.024 mempool/stack: not in enabled drivers build config 00:02:44.024 dma/cnxk: not in enabled drivers build config 00:02:44.024 dma/dpaa: not in enabled drivers build config 00:02:44.024 dma/dpaa2: not in enabled drivers build config 00:02:44.024 
dma/hisilicon: not in enabled drivers build config 00:02:44.024 dma/idxd: not in enabled drivers build config 00:02:44.024 dma/ioat: not in enabled drivers build config 00:02:44.024 dma/skeleton: not in enabled drivers build config 00:02:44.024 net/af_packet: not in enabled drivers build config 00:02:44.024 net/af_xdp: not in enabled drivers build config 00:02:44.024 net/ark: not in enabled drivers build config 00:02:44.024 net/atlantic: not in enabled drivers build config 00:02:44.024 net/avp: not in enabled drivers build config 00:02:44.024 net/axgbe: not in enabled drivers build config 00:02:44.024 net/bnx2x: not in enabled drivers build config 00:02:44.024 net/bnxt: not in enabled drivers build config 00:02:44.024 net/bonding: not in enabled drivers build config 00:02:44.024 net/cnxk: not in enabled drivers build config 00:02:44.024 net/cpfl: not in enabled drivers build config 00:02:44.024 net/cxgbe: not in enabled drivers build config 00:02:44.024 net/dpaa: not in enabled drivers build config 00:02:44.024 net/dpaa2: not in enabled drivers build config 00:02:44.024 net/e1000: not in enabled drivers build config 00:02:44.024 net/ena: not in enabled drivers build config 00:02:44.024 net/enetc: not in enabled drivers build config 00:02:44.024 net/enetfec: not in enabled drivers build config 00:02:44.024 net/enic: not in enabled drivers build config 00:02:44.024 net/failsafe: not in enabled drivers build config 00:02:44.024 net/fm10k: not in enabled drivers build config 00:02:44.024 net/gve: not in enabled drivers build config 00:02:44.024 net/hinic: not in enabled drivers build config 00:02:44.024 net/hns3: not in enabled drivers build config 00:02:44.024 net/iavf: not in enabled drivers build config 00:02:44.024 net/ice: not in enabled drivers build config 00:02:44.024 net/idpf: not in enabled drivers build config 00:02:44.024 net/igc: not in enabled drivers build config 00:02:44.024 net/ionic: not in enabled drivers build config 00:02:44.024 net/ipn3ke: not in enabled drivers build config 00:02:44.024 net/ixgbe: not in enabled drivers build config 00:02:44.024 net/mana: not in enabled drivers build config 00:02:44.024 net/memif: not in enabled drivers build config 00:02:44.024 net/mlx4: not in enabled drivers build config 00:02:44.024 net/mlx5: not in enabled drivers build config 00:02:44.024 net/mvneta: not in enabled drivers build config 00:02:44.024 net/mvpp2: not in enabled drivers build config 00:02:44.024 net/netvsc: not in enabled drivers build config 00:02:44.024 net/nfb: not in enabled drivers build config 00:02:44.024 net/nfp: not in enabled drivers build config 00:02:44.024 net/ngbe: not in enabled drivers build config 00:02:44.024 net/null: not in enabled drivers build config 00:02:44.024 net/octeontx: not in enabled drivers build config 00:02:44.024 net/octeon_ep: not in enabled drivers build config 00:02:44.024 net/pcap: not in enabled drivers build config 00:02:44.024 net/pfe: not in enabled drivers build config 00:02:44.024 net/qede: not in enabled drivers build config 00:02:44.024 net/ring: not in enabled drivers build config 00:02:44.024 net/sfc: not in enabled drivers build config 00:02:44.024 net/softnic: not in enabled drivers build config 00:02:44.024 net/tap: not in enabled drivers build config 00:02:44.024 net/thunderx: not in enabled drivers build config 00:02:44.024 net/txgbe: not in enabled drivers build config 00:02:44.024 net/vdev_netvsc: not in enabled drivers build config 00:02:44.024 net/vhost: not in enabled drivers build config 00:02:44.024 net/virtio: 
not in enabled drivers build config 00:02:44.024 net/vmxnet3: not in enabled drivers build config 00:02:44.024 raw/cnxk_bphy: not in enabled drivers build config 00:02:44.024 raw/cnxk_gpio: not in enabled drivers build config 00:02:44.024 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:44.024 raw/ifpga: not in enabled drivers build config 00:02:44.024 raw/ntb: not in enabled drivers build config 00:02:44.024 raw/skeleton: not in enabled drivers build config 00:02:44.024 crypto/armv8: not in enabled drivers build config 00:02:44.024 crypto/bcmfs: not in enabled drivers build config 00:02:44.024 crypto/caam_jr: not in enabled drivers build config 00:02:44.024 crypto/ccp: not in enabled drivers build config 00:02:44.024 crypto/cnxk: not in enabled drivers build config 00:02:44.024 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.024 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.024 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.025 crypto/mlx5: not in enabled drivers build config 00:02:44.025 crypto/mvsam: not in enabled drivers build config 00:02:44.025 crypto/nitrox: not in enabled drivers build config 00:02:44.025 crypto/null: not in enabled drivers build config 00:02:44.025 crypto/octeontx: not in enabled drivers build config 00:02:44.025 crypto/openssl: not in enabled drivers build config 00:02:44.025 crypto/scheduler: not in enabled drivers build config 00:02:44.025 crypto/uadk: not in enabled drivers build config 00:02:44.025 crypto/virtio: not in enabled drivers build config 00:02:44.025 compress/isal: not in enabled drivers build config 00:02:44.025 compress/mlx5: not in enabled drivers build config 00:02:44.025 compress/octeontx: not in enabled drivers build config 00:02:44.025 compress/zlib: not in enabled drivers build config 00:02:44.025 regex/mlx5: not in enabled drivers build config 00:02:44.025 regex/cn9k: not in enabled drivers build config 00:02:44.025 ml/cnxk: not in enabled drivers build config 00:02:44.025 vdpa/ifc: not in enabled drivers build config 00:02:44.025 vdpa/mlx5: not in enabled drivers build config 00:02:44.025 vdpa/nfp: not in enabled drivers build config 00:02:44.025 vdpa/sfc: not in enabled drivers build config 00:02:44.025 event/cnxk: not in enabled drivers build config 00:02:44.025 event/dlb2: not in enabled drivers build config 00:02:44.025 event/dpaa: not in enabled drivers build config 00:02:44.025 event/dpaa2: not in enabled drivers build config 00:02:44.025 event/dsw: not in enabled drivers build config 00:02:44.025 event/opdl: not in enabled drivers build config 00:02:44.025 event/skeleton: not in enabled drivers build config 00:02:44.025 event/sw: not in enabled drivers build config 00:02:44.025 event/octeontx: not in enabled drivers build config 00:02:44.025 baseband/acc: not in enabled drivers build config 00:02:44.025 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:44.025 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:44.025 baseband/la12xx: not in enabled drivers build config 00:02:44.025 baseband/null: not in enabled drivers build config 00:02:44.025 baseband/turbo_sw: not in enabled drivers build config 00:02:44.025 gpu/cuda: not in enabled drivers build config 00:02:44.025 00:02:44.025 00:02:44.025 Build targets in project: 220 00:02:44.025 00:02:44.025 DPDK 23.11.0 00:02:44.025 00:02:44.025 User defined options 00:02:44.025 libdir : lib 00:02:44.025 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:44.025 c_args : -fPIC -g -fcommon -Werror 
-Wno-stringop-overflow 00:02:44.025 c_link_args : 00:02:44.025 enable_docs : false 00:02:44.025 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:44.025 enable_kmods : false 00:02:44.025 machine : native 00:02:44.025 tests : false 00:02:44.025 00:02:44.025 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.025 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:44.284 15:57:50 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:44.284 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:44.284 [1/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.284 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.284 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.284 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.284 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.284 [6/710] Linking static target lib/librte_kvargs.a 00:02:44.543 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.543 [8/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.543 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.543 [10/710] Linking static target lib/librte_log.a 00:02:44.543 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.801 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.801 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.801 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:45.060 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.060 [16/710] Linking target lib/librte_log.so.24.0 00:02:45.060 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:45.060 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:45.319 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:45.319 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:45.319 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:45.319 [22/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:45.319 [23/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:45.319 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:45.577 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:45.577 [26/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:45.577 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:45.577 [28/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:45.577 [29/710] Linking static target lib/librte_telemetry.a 00:02:45.577 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:45.836 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:45.836 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:45.836 [33/710] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:46.094 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:46.094 [35/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.094 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:46.094 [37/710] Linking target lib/librte_telemetry.so.24.0 00:02:46.094 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:46.094 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:46.094 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:46.094 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:46.094 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:46.094 [43/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:46.353 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:46.353 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:46.611 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:46.611 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:46.611 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:46.870 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:46.870 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:46.870 [51/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:46.870 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:46.870 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:46.870 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.129 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.129 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.129 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.129 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.388 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:47.388 [60/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.388 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.388 [62/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:47.388 [63/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.388 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:47.647 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:47.647 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:47.647 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:47.647 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:47.905 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:47.905 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:47.905 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:47.905 [72/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 
00:02:48.163 [73/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.163 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:48.163 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.163 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.163 [77/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.163 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.421 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.421 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.680 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.680 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.680 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.680 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.680 [85/710] Linking static target lib/librte_ring.a 00:02:48.938 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.938 [87/710] Linking static target lib/librte_eal.a 00:02:48.938 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:48.938 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.938 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.197 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.197 [92/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.197 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.197 [94/710] Linking static target lib/librte_mempool.a 00:02:49.197 [95/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.455 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.455 [97/710] Linking static target lib/librte_rcu.a 00:02:49.455 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:49.455 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:49.714 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.714 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.714 [102/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.714 [103/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.972 [104/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.972 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.972 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:50.230 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.230 [108/710] Linking static target lib/librte_mbuf.a 00:02:50.230 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:50.230 [110/710] Linking static target lib/librte_net.a 00:02:50.230 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:50.230 [112/710] Linking static target lib/librte_meter.a 00:02:50.489 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.489 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:50.489 [115/710] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.489 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.489 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.747 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.747 [119/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.313 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:51.313 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.571 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.571 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.571 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:51.571 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.829 [126/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.829 [127/710] Linking static target lib/librte_pci.a 00:02:51.829 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.829 [129/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.829 [130/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.087 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:52.087 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:52.087 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:52.087 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:52.087 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:52.087 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:52.087 [137/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:52.087 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:52.087 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:52.087 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:52.345 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:52.345 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:52.603 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:52.603 [144/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:52.603 [145/710] Linking static target lib/librte_cmdline.a 00:02:52.861 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:52.861 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:52.861 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:52.861 [149/710] Linking static target lib/librte_metrics.a 00:02:53.119 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:53.119 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.377 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.377 [153/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:53.377 [154/710] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:53.377 [155/710] Linking static target lib/librte_timer.a 00:02:53.943 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.201 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:54.201 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:54.201 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:54.459 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:54.717 [161/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:54.975 [162/710] Linking static target lib/librte_ethdev.a 00:02:54.975 [163/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:54.975 [164/710] Linking static target lib/librte_bitratestats.a 00:02:54.975 [165/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:54.975 [166/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.975 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:55.233 [168/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.233 [169/710] Linking target lib/librte_eal.so.24.0 00:02:55.233 [170/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:55.233 [171/710] Linking static target lib/librte_bbdev.a 00:02:55.233 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:55.233 [173/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:55.233 [174/710] Linking static target lib/librte_hash.a 00:02:55.233 [175/710] Linking target lib/librte_ring.so.24.0 00:02:55.233 [176/710] Linking target lib/librte_meter.so.24.0 00:02:55.492 [177/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:55.492 [178/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:55.492 [179/710] Linking target lib/librte_rcu.so.24.0 00:02:55.492 [180/710] Linking target lib/librte_mempool.so.24.0 00:02:55.492 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:55.492 [182/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:55.492 [183/710] Linking target lib/librte_pci.so.24.0 00:02:55.751 [184/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:55.751 [185/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:55.751 [186/710] Linking target lib/librte_mbuf.so.24.0 00:02:55.751 [187/710] Linking target lib/librte_timer.so.24.0 00:02:55.751 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:55.751 [189/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:55.751 [190/710] Linking static target lib/acl/libavx2_tmp.a 00:02:55.751 [191/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.751 [192/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:55.751 [193/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:55.751 [194/710] Linking target lib/librte_net.so.24.0 00:02:55.751 [195/710] Linking target lib/librte_bbdev.so.24.0 00:02:56.009 [196/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:56.009 [197/710] Linking static target lib/acl/libavx512_tmp.a 00:02:56.009 [198/710] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:56.009 [199/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:56.009 [200/710] Linking target lib/librte_cmdline.so.24.0 00:02:56.267 [201/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.267 [202/710] Linking target lib/librte_hash.so.24.0 00:02:56.267 [203/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:56.267 [204/710] Linking static target lib/librte_acl.a 00:02:56.267 [205/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:56.267 [206/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:56.267 [207/710] Linking static target lib/librte_cfgfile.a 00:02:56.267 [208/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:56.525 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.525 [210/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:56.525 [211/710] Linking target lib/librte_acl.so.24.0 00:02:56.784 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.784 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:56.784 [214/710] Linking target lib/librte_cfgfile.so.24.0 00:02:56.784 [215/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:56.784 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:57.042 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.042 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:57.301 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.301 [220/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:57.301 [221/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:57.301 [222/710] Linking static target lib/librte_compressdev.a 00:02:57.301 [223/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:57.559 [224/710] Linking static target lib/librte_bpf.a 00:02:57.559 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:57.559 [226/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:57.817 [227/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.817 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:57.817 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:57.817 [230/710] Linking static target lib/librte_distributor.a 00:02:57.817 [231/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.075 [232/710] Linking target lib/librte_compressdev.so.24.0 00:02:58.075 [233/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.075 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.075 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:58.334 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.334 [237/710] Linking static target lib/librte_dmadev.a 00:02:58.334 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:58.592 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.592 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:58.592 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:58.851 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:58.851 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:59.109 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:59.109 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:59.109 [246/710] Linking static target lib/librte_efd.a 00:02:59.368 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:59.368 [248/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:59.368 [249/710] Linking static target lib/librte_cryptodev.a 00:02:59.368 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.368 [251/710] Linking target lib/librte_efd.so.24.0 00:02:59.626 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.626 [253/710] Linking target lib/librte_ethdev.so.24.0 00:02:59.885 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:59.885 [255/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:59.885 [256/710] Linking static target lib/librte_dispatcher.a 00:02:59.885 [257/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:59.885 [258/710] Linking target lib/librte_metrics.so.24.0 00:03:00.143 [259/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:03:00.143 [260/710] Linking target lib/librte_bitratestats.so.24.0 00:03:00.143 [261/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:00.143 [262/710] Linking static target lib/librte_gpudev.a 00:03:00.143 [263/710] Linking target lib/librte_bpf.so.24.0 00:03:00.143 [264/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:00.143 [265/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:00.143 [266/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:03:00.143 [267/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.403 [268/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:00.661 [269/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:03:00.661 [270/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:00.661 [271/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.661 [272/710] Linking target lib/librte_cryptodev.so.24.0 00:03:00.920 [273/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:03:00.920 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.920 [275/710] Linking target lib/librte_gpudev.so.24.0 00:03:00.920 [276/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:01.178 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:01.178 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:01.178 [279/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:01.178 [280/710] Linking static target lib/librte_gro.a 00:03:01.178 [281/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:01.178 [282/710] Linking static target lib/librte_eventdev.a 00:03:01.178 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:01.178 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:01.178 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:01.437 [286/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.437 [287/710] Linking target lib/librte_gro.so.24.0 00:03:01.437 [288/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:01.437 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:01.437 [290/710] Linking static target lib/librte_gso.a 00:03:01.695 [291/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.695 [292/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:01.695 [293/710] Linking target lib/librte_gso.so.24.0 00:03:01.953 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:01.953 [295/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:01.953 [296/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:01.953 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:01.953 [298/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:01.953 [299/710] Linking static target lib/librte_jobstats.a 00:03:02.231 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:02.231 [301/710] Linking static target lib/librte_ip_frag.a 00:03:02.231 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:02.231 [303/710] Linking static target lib/librte_latencystats.a 00:03:02.494 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.494 [305/710] Linking target lib/librte_jobstats.so.24.0 00:03:02.494 [306/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.494 [307/710] Linking target lib/librte_ip_frag.so.24.0 00:03:02.494 [308/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.494 [309/710] Linking target lib/librte_latencystats.so.24.0 00:03:02.494 [310/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:02.494 [311/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:02.494 [312/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:02.751 [313/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:03:02.751 [314/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:02.751 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.751 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.751 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:03.316 [318/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:03.316 [319/710] Linking static target lib/librte_lpm.a 
00:03:03.316 [320/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.316 [321/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:03.316 [322/710] Linking target lib/librte_eventdev.so.24.0 00:03:03.316 [323/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.316 [324/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.316 [325/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:03:03.575 [326/710] Linking target lib/librte_dispatcher.so.24.0 00:03:03.575 [327/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.575 [328/710] Linking target lib/librte_lpm.so.24.0 00:03:03.575 [329/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.575 [330/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:03.575 [331/710] Linking static target lib/librte_pcapng.a 00:03:03.575 [332/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.575 [333/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:03.575 [334/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:03:03.834 [335/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.834 [336/710] Linking target lib/librte_pcapng.so.24.0 00:03:03.834 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:03:04.093 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:04.093 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:04.093 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:04.351 [341/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:03:04.351 [342/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.351 [343/710] Linking static target lib/librte_power.a 00:03:04.351 [344/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:04.351 [345/710] Linking static target lib/librte_member.a 00:03:04.351 [346/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:04.351 [347/710] Linking static target lib/librte_regexdev.a 00:03:04.351 [348/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:04.351 [349/710] Linking static target lib/librte_rawdev.a 00:03:04.609 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:03:04.610 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:03:04.610 [352/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:03:04.610 [353/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.868 [354/710] Linking target lib/librte_member.so.24.0 00:03:04.868 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:03:04.868 [356/710] Linking static target lib/librte_mldev.a 00:03:04.868 [357/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:04.868 [358/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.868 [359/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.868 [360/710] Linking target 
lib/librte_rawdev.so.24.0 00:03:04.868 [361/710] Linking target lib/librte_power.so.24.0 00:03:05.127 [362/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:05.127 [363/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.127 [364/710] Linking target lib/librte_regexdev.so.24.0 00:03:05.385 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:05.385 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.385 [367/710] Linking static target lib/librte_reorder.a 00:03:05.385 [368/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:05.385 [369/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:05.644 [370/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:05.644 [371/710] Linking static target lib/librte_rib.a 00:03:05.644 [372/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:05.644 [373/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:05.644 [374/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.902 [375/710] Linking target lib/librte_reorder.so.24.0 00:03:05.902 [376/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:05.902 [377/710] Linking static target lib/librte_stack.a 00:03:05.902 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:05.902 [379/710] Linking static target lib/librte_security.a 00:03:05.902 [380/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:03:05.902 [381/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.161 [382/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.161 [383/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.161 [384/710] Linking target lib/librte_rib.so.24.0 00:03:06.161 [385/710] Linking target lib/librte_mldev.so.24.0 00:03:06.161 [386/710] Linking target lib/librte_stack.so.24.0 00:03:06.161 [387/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:03:06.161 [388/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:06.161 [389/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.420 [390/710] Linking target lib/librte_security.so.24.0 00:03:06.420 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.420 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:03:06.420 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.678 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:06.678 [395/710] Linking static target lib/librte_sched.a 00:03:06.936 [396/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:06.936 [397/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.195 [398/710] Linking target lib/librte_sched.so.24.0 00:03:07.195 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:07.195 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:03:07.453 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:07.453 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:07.711 
[403/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:07.711 [404/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:07.970 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:07.970 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:03:08.228 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:03:08.228 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:08.228 [409/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:03:08.228 [410/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:08.487 [411/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:08.487 [412/710] Linking static target lib/librte_ipsec.a 00:03:08.487 [413/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:03:08.746 [414/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.746 [415/710] Linking target lib/librte_ipsec.so.24.0 00:03:08.746 [416/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:08.746 [417/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:09.004 [418/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:03:09.004 [419/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:03:09.004 [420/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:09.004 [421/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:09.004 [422/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:09.004 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:09.938 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:09.938 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:09.938 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:09.938 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:09.938 [428/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:03:09.938 [429/710] Linking static target lib/librte_pdcp.a 00:03:09.938 [430/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:09.938 [431/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:09.938 [432/710] Linking static target lib/librte_fib.a 00:03:10.196 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.454 [434/710] Linking target lib/librte_pdcp.so.24.0 00:03:10.454 [435/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.454 [436/710] Linking target lib/librte_fib.so.24.0 00:03:10.454 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:11.021 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:11.021 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:11.021 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:11.021 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:11.279 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:11.279 [443/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:11.279 [444/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:11.537 [445/710] Compiling C object 
lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:11.537 [446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:11.537 [447/710] Linking static target lib/librte_port.a 00:03:11.795 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:11.795 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:12.053 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:12.053 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:12.053 [452/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.053 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:12.053 [454/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:12.053 [455/710] Linking target lib/librte_port.so.24.0 00:03:12.311 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:12.311 [457/710] Linking static target lib/librte_pdump.a 00:03:12.311 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:03:12.311 [459/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:12.570 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.570 [461/710] Linking target lib/librte_pdump.so.24.0 00:03:12.570 [462/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:13.136 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:13.136 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:13.136 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:13.136 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:13.136 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:13.395 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:13.395 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:13.653 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:13.653 [471/710] Linking static target lib/librte_table.a 00:03:13.653 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:13.653 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:14.219 [474/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:14.219 [475/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.219 [476/710] Linking target lib/librte_table.so.24.0 00:03:14.478 [477/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:14.478 [478/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:03:14.478 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:14.736 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:03:14.994 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:15.252 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:15.252 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:15.252 [484/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:03:15.252 [485/710] Compiling C object 
lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:15.252 [486/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:03:15.826 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:15.826 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:15.826 [489/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:03:15.826 [490/710] Linking static target lib/librte_graph.a 00:03:16.101 [491/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:16.101 [492/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:16.359 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:03:16.617 [494/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.617 [495/710] Linking target lib/librte_graph.so.24.0 00:03:16.617 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:03:16.617 [497/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:16.617 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:03:16.875 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:17.134 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:03:17.134 [501/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:17.392 [502/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:17.392 [503/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:03:17.392 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:03:17.392 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:17.392 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:03:17.651 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:17.651 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:17.909 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:18.167 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:18.167 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:18.167 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:18.167 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:03:18.167 [514/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:18.167 [515/710] Linking static target lib/librte_node.a 00:03:18.734 [516/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:18.734 [517/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:18.734 [518/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.734 [519/710] Linking target lib/librte_node.so.24.0 00:03:18.734 [520/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:18.734 [521/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:18.734 [522/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:18.734 [523/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:18.734 [524/710] Linking static target drivers/librte_bus_vdev.a 00:03:18.992 [525/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:18.992 [526/710] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.992 [527/710] Linking static target drivers/librte_bus_pci.a 00:03:18.992 [528/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.992 [529/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:18.992 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:19.250 [531/710] Linking target drivers/librte_bus_vdev.so.24.0 00:03:19.250 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:19.250 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:19.250 [534/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:03:19.250 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:19.508 [536/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:19.508 [537/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:19.508 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.508 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:03:19.767 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:19.767 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.767 [542/710] Linking static target drivers/librte_mempool_ring.a 00:03:19.767 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:19.767 [544/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:03:19.767 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:03:19.767 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:20.333 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:20.591 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:20.591 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:20.591 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:20.591 [551/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:21.525 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:21.525 [553/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:03:21.525 [554/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:03:21.525 [555/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:21.783 [556/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:21.783 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:22.350 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:22.350 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:22.350 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:22.608 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:22.608 [562/710] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:23.173 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:23.173 [564/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:23.173 [565/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:23.431 [566/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:23.688 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:23.946 [568/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:23.946 [569/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:23.946 [570/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:23.946 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:23.946 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:24.204 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:24.462 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:24.462 [575/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.462 [576/710] Linking static target lib/librte_vhost.a 00:03:24.462 [577/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:24.462 [578/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:24.721 [579/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:24.721 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:24.721 [581/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:24.721 [582/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:25.287 [583/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:25.287 [584/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.287 [585/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:25.287 [586/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:25.287 [587/710] Linking static target drivers/librte_net_i40e.a 00:03:25.287 [588/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:25.287 [589/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:25.287 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:25.287 [591/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:25.545 [592/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:25.545 [593/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.803 [594/710] Linking target lib/librte_vhost.so.24.0 00:03:25.803 [595/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.803 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:26.061 [597/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:26.061 [598/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:26.061 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:26.627 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:26.627 [601/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:26.627 [602/710] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:26.627 [603/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:26.885 [604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:26.885 [605/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:26.885 [606/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:27.142 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:27.707 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:27.707 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:27.707 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:27.707 [611/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:27.707 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:27.707 [613/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:27.965 [614/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:27.965 [615/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:27.965 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:27.965 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:28.223 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:28.495 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:28.778 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:28.778 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:28.778 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:29.036 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:29.602 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:29.861 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:29.861 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:29.861 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:30.119 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:30.119 [629/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:30.119 [630/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:30.119 [631/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:30.119 [632/710] Linking static target lib/librte_pipeline.a 00:03:30.119 [633/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:30.378 [634/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:30.636 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:30.636 [636/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:30.636 [637/710] Linking target app/dpdk-dumpcap 
00:03:30.636 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:30.894 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:03:30.894 [640/710] Linking target app/dpdk-graph 00:03:31.153 [641/710] Linking target app/dpdk-test-acl 00:03:31.153 [642/710] Linking target app/dpdk-pdump 00:03:31.153 [643/710] Linking target app/dpdk-proc-info 00:03:31.153 [644/710] Linking target app/dpdk-test-cmdline 00:03:31.153 [645/710] Linking target app/dpdk-test-compress-perf 00:03:31.411 [646/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:31.411 [647/710] Linking target app/dpdk-test-crypto-perf 00:03:31.411 [648/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:31.669 [649/710] Linking target app/dpdk-test-dma-perf 00:03:31.669 [650/710] Linking target app/dpdk-test-fib 00:03:31.669 [651/710] Linking target app/dpdk-test-gpudev 00:03:31.669 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:31.927 [653/710] Linking target app/dpdk-test-flow-perf 00:03:31.927 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:31.927 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:31.927 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:32.186 [657/710] Linking target app/dpdk-test-eventdev 00:03:32.186 [658/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:32.186 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:32.444 [660/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:32.444 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:32.708 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:32.708 [663/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:32.708 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:32.708 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:32.708 [666/710] Linking target app/dpdk-test-bbdev 00:03:32.966 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:33.224 [668/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:33.224 [669/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:33.224 [670/710] Linking target lib/librte_pipeline.so.24.0 00:03:33.224 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:33.224 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:33.224 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:33.482 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:33.741 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:33.741 [676/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:33.741 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:33.999 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:34.257 [679/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:34.257 [680/710] Linking target app/dpdk-test-pipeline 00:03:34.257 
[681/710] Linking target app/dpdk-test-mldev 00:03:34.257 [682/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:34.516 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:35.082 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:35.082 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:35.082 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:35.082 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:35.082 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:35.340 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:35.340 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:35.907 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:35.907 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:35.907 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:36.165 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:36.423 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:36.423 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:36.682 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:36.940 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:36.940 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:36.940 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:37.198 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:37.198 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:37.198 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:37.198 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:37.457 [705/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:37.457 [706/710] Linking target app/dpdk-test-regex 00:03:37.457 [707/710] Linking target app/dpdk-test-sad 00:03:37.715 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:37.974 [709/710] Linking target app/dpdk-testpmd 00:03:38.233 [710/710] Linking target app/dpdk-test-security-perf 00:03:38.233 15:58:44 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:38.233 15:58:44 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:38.233 15:58:44 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:38.492 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:38.492 [0/1] Installing files. 
00:03:38.754 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:38.754 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.754 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:38.755 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:38.756 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:38.756 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.757 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:38.758 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.759 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:38.759 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:38.759 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.018 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.019 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:39.019 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.019 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.281 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.281 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.281 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.281 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:39.281 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.281 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.282 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.283 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.284 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:39.284 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:39.284 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:39.284 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:39.284 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:39.284 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:39.284 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:39.284 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:39.284 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:39.284 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:39.284 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:39.284 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:39.284 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:39.284 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:39.284 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:39.284 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:39.284 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:39.284 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:39.284 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:39.284 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:39.284 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:39.284 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:39.284 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:39.284 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:39.284 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:39.284 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:39.284 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:39.284 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:39.284 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:39.284 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:39.284 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:39.284 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:39.284 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:39.284 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:39.284 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:39.284 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:39.284 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:39.284 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:39.284 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:39.284 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:39.284 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:39.284 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:39.284 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:39.284 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:39.284 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:39.284 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:39.284 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:39.284 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:39.284 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:39.284 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:39.284 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:39.285 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:39.285 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:39.285 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:39.285 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:39.285 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:39.285 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:39.285 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:39.285 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:39.285 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:39.285 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:39.285 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:39.285 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:39.285 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:39.285 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:39.285 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:39.285 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:39.285 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:39.285 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:39.285 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:39.285 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:39.285 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:39.285 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:39.285 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:39.285 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:39.285 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:39.285 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:39.285 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:39.285 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:39.285 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:39.285 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:39.285 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:39.285 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:39.285 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:39.285 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:39.285 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:39.285 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:39.285 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:39.285 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:39.285 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:39.285 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:39.285 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:39.285 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:39.285 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:39.285 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:39.285 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:39.285 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:39.285 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:39.285 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:39.285 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:39.285 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:39.285 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:39.285 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:39.285 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:39.285 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:39.285 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:39.285 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:39.285 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:39.285 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:39.285 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:39.285 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:39.285 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:39.285 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:39.285 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:39.285 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:39.285 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:39.285 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:39.285 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:39.285 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:39.285 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:39.285 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:39.285 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:39.285 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:39.285 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:39.285 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:39.285 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:39.285 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:39.285 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:39.285 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:39.285 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:39.285 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:39.285 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:39.285 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:39.285 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:39.285 15:58:45 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:39.285 15:58:45 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:39.285 00:03:39.285 real 1m2.637s 00:03:39.285 user 7m40.588s 00:03:39.285 sys 1m6.121s 00:03:39.285 15:58:45 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:39.285 15:58:45 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:39.285 ************************************ 00:03:39.285 END TEST build_native_dpdk 00:03:39.285 ************************************ 00:03:39.545 15:58:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:39.545 15:58:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:39.545 15:58:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:39.545 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:39.803 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:39.803 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:39.803 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:40.062 Using 'verbs' RDMA provider 00:03:53.705 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:08.585 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:08.585 Creating mk/config.mk...done. 00:04:08.585 Creating mk/cc.flags.mk...done. 00:04:08.585 Type 'make' to build. 00:04:08.585 15:59:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:08.585 15:59:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:08.585 15:59:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:08.585 15:59:13 -- common/autotest_common.sh@10 -- $ set +x 00:04:08.585 ************************************ 00:04:08.585 START TEST make 00:04:08.585 ************************************ 00:04:08.585 15:59:13 make -- common/autotest_common.sh@1129 -- $ make -j10 00:04:08.585 make[1]: Nothing to be done for 'all'. 
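Editor's note: the ./configure invocation above consumes the DPDK tree that was just installed (--with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared), and the "Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs" message shows that the libdpdk.pc / libdpdk-libs.pc files installed a few steps earlier are what supply the include and library paths. A minimal sketch of how that lookup can be reproduced by hand, assuming the same paths as this run (an illustrative check, not a command captured in this log):
  # Point pkg-config at the freshly installed DPDK metadata
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk   # prints the DPDK version checked out for this job
  pkg-config --cflags libdpdk       # include flags pointing under /home/vagrant/spdk_repo/dpdk/build/include
  pkg-config --libs libdpdk         # -L/home/vagrant/spdk_repo/dpdk/build/lib plus the librte_* libraries listed above
The versioned symlinks installed above (librte_X.so -> librte_X.so.24 -> librte_X.so.24.0) give the link step an unversioned name while the runtime loader resolves the ABI-versioned soname; the bus, mempool and net drivers are additionally placed under dpdk/pmds-24.0/ so that EAL can pick them up as plugins at runtime.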
00:04:08.585 The Meson build system 00:04:08.585 Version: 1.5.0 00:04:08.585 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:04:08.585 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:08.585 Build type: native build 00:04:08.585 Project name: libvfio-user 00:04:08.585 Project version: 0.0.1 00:04:08.585 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:08.585 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:08.585 Host machine cpu family: x86_64 00:04:08.585 Host machine cpu: x86_64 00:04:08.585 Run-time dependency threads found: YES 00:04:08.585 Library dl found: YES 00:04:08.585 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:08.585 Run-time dependency json-c found: YES 0.17 00:04:08.585 Run-time dependency cmocka found: YES 1.1.7 00:04:08.585 Program pytest-3 found: NO 00:04:08.585 Program flake8 found: NO 00:04:08.585 Program misspell-fixer found: NO 00:04:08.585 Program restructuredtext-lint found: NO 00:04:08.585 Program valgrind found: YES (/usr/bin/valgrind) 00:04:08.585 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:08.585 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:08.585 Compiler for C supports arguments -Wwrite-strings: YES 00:04:08.585 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:08.585 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:04:08.585 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:04:08.585 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:08.585 Build targets in project: 8 00:04:08.585 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:08.585 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:08.585 00:04:08.585 libvfio-user 0.0.1 00:04:08.585 00:04:08.585 User defined options 00:04:08.585 buildtype : debug 00:04:08.585 default_library: shared 00:04:08.585 libdir : /usr/local/lib 00:04:08.585 00:04:08.585 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:09.153 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:09.153 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:09.153 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:09.153 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:09.153 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:09.153 [5/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:09.153 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:09.153 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:09.153 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:09.153 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:09.153 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:09.153 [11/37] Compiling C object samples/null.p/null.c.o 00:04:09.413 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:09.413 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:09.413 [14/37] Compiling C object samples/client.p/client.c.o 00:04:09.413 [15/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:09.413 [16/37] Compiling C object samples/server.p/server.c.o 00:04:09.413 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:09.413 [18/37] Linking target samples/client 00:04:09.413 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:09.413 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:09.413 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:09.413 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:09.413 [23/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:09.413 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:09.413 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:09.413 [26/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:09.413 [27/37] Linking target lib/libvfio-user.so.0.0.1 00:04:09.413 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:09.413 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:09.672 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:09.672 [31/37] Linking target test/unit_tests 00:04:09.672 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:09.672 [33/37] Linking target samples/server 00:04:09.672 [34/37] Linking target samples/gpio-pci-idio-16 00:04:09.672 [35/37] Linking target samples/lspci 00:04:09.672 [36/37] Linking target samples/null 00:04:09.672 [37/37] Linking target samples/shadow_ioeventfd_server 00:04:09.672 INFO: autodetecting backend as ninja 00:04:09.672 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:09.672 
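Editor's note: the block above is meson configuring and ninja-building the bundled libvfio-user (buildtype debug, shared default_library, libdir /usr/local/lib) before SPDK's own objects are compiled; the DESTDIR line that follows stages the install into SPDK's build tree. A minimal sketch of the equivalent standalone sequence, assuming the same source and build directories as this log (the exact options SPDK's build script passes to meson setup are not shown here and are inferred from the configuration summary above):
  # Configure libvfio-user out of tree with the options reported in the summary
  meson setup /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug \
        /home/vagrant/spdk_repo/spdk/libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
  # Build the 37 targets listed above
  ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug
  # Stage the install into SPDK's build tree, as on the next log line
  DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user \
        meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug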
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:04:10.239 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:04:10.239 ninja: no work to do. 00:05:06.494 CC lib/ut_mock/mock.o 00:05:06.494 CC lib/log/log.o 00:05:06.494 CC lib/log/log_flags.o 00:05:06.494 CC lib/log/log_deprecated.o 00:05:06.494 CC lib/ut/ut.o 00:05:06.494 LIB libspdk_log.a 00:05:06.494 LIB libspdk_ut.a 00:05:06.494 LIB libspdk_ut_mock.a 00:05:06.494 SO libspdk_ut.so.2.0 00:05:06.494 SO libspdk_ut_mock.so.6.0 00:05:06.494 SO libspdk_log.so.7.1 00:05:06.494 SYMLINK libspdk_ut.so 00:05:06.494 SYMLINK libspdk_ut_mock.so 00:05:06.494 SYMLINK libspdk_log.so 00:05:06.494 CC lib/util/base64.o 00:05:06.494 CC lib/util/bit_array.o 00:05:06.494 CXX lib/trace_parser/trace.o 00:05:06.494 CC lib/util/cpuset.o 00:05:06.494 CC lib/util/crc16.o 00:05:06.494 CC lib/ioat/ioat.o 00:05:06.494 CC lib/util/crc32.o 00:05:06.494 CC lib/dma/dma.o 00:05:06.494 CC lib/util/crc32c.o 00:05:06.494 CC lib/vfio_user/host/vfio_user_pci.o 00:05:06.494 CC lib/vfio_user/host/vfio_user.o 00:05:06.494 CC lib/util/crc32_ieee.o 00:05:06.494 CC lib/util/crc64.o 00:05:06.494 CC lib/util/dif.o 00:05:06.494 CC lib/util/fd.o 00:05:06.494 LIB libspdk_dma.a 00:05:06.494 SO libspdk_dma.so.5.0 00:05:06.494 CC lib/util/fd_group.o 00:05:06.494 SYMLINK libspdk_dma.so 00:05:06.494 CC lib/util/file.o 00:05:06.494 CC lib/util/hexlify.o 00:05:06.494 CC lib/util/iov.o 00:05:06.494 LIB libspdk_ioat.a 00:05:06.494 CC lib/util/math.o 00:05:06.494 SO libspdk_ioat.so.7.0 00:05:06.494 LIB libspdk_vfio_user.a 00:05:06.494 CC lib/util/net.o 00:05:06.494 SO libspdk_vfio_user.so.5.0 00:05:06.494 SYMLINK libspdk_ioat.so 00:05:06.494 CC lib/util/pipe.o 00:05:06.494 SYMLINK libspdk_vfio_user.so 00:05:06.494 CC lib/util/strerror_tls.o 00:05:06.494 CC lib/util/string.o 00:05:06.494 CC lib/util/uuid.o 00:05:06.494 CC lib/util/xor.o 00:05:06.494 CC lib/util/zipf.o 00:05:06.494 CC lib/util/md5.o 00:05:06.494 LIB libspdk_util.a 00:05:06.494 SO libspdk_util.so.10.1 00:05:06.494 SYMLINK libspdk_util.so 00:05:06.494 LIB libspdk_trace_parser.a 00:05:06.494 SO libspdk_trace_parser.so.6.0 00:05:06.494 SYMLINK libspdk_trace_parser.so 00:05:06.494 CC lib/rdma_utils/rdma_utils.o 00:05:06.494 CC lib/conf/conf.o 00:05:06.494 CC lib/env_dpdk/env.o 00:05:06.494 CC lib/idxd/idxd.o 00:05:06.494 CC lib/env_dpdk/memory.o 00:05:06.494 CC lib/vmd/vmd.o 00:05:06.494 CC lib/env_dpdk/pci.o 00:05:06.494 CC lib/idxd/idxd_user.o 00:05:06.494 CC lib/env_dpdk/init.o 00:05:06.494 CC lib/json/json_parse.o 00:05:06.494 LIB libspdk_conf.a 00:05:06.494 CC lib/json/json_util.o 00:05:06.494 SO libspdk_conf.so.6.0 00:05:06.494 CC lib/json/json_write.o 00:05:06.494 LIB libspdk_rdma_utils.a 00:05:06.494 SYMLINK libspdk_conf.so 00:05:06.494 CC lib/idxd/idxd_kernel.o 00:05:06.494 SO libspdk_rdma_utils.so.1.0 00:05:06.494 CC lib/vmd/led.o 00:05:06.494 SYMLINK libspdk_rdma_utils.so 00:05:06.494 CC lib/env_dpdk/threads.o 00:05:06.494 CC lib/env_dpdk/pci_ioat.o 00:05:06.494 CC lib/env_dpdk/pci_virtio.o 00:05:06.494 CC lib/env_dpdk/pci_vmd.o 00:05:06.494 CC lib/env_dpdk/pci_idxd.o 00:05:06.494 CC lib/env_dpdk/pci_event.o 00:05:06.494 LIB libspdk_json.a 00:05:06.494 SO libspdk_json.so.6.0 00:05:06.494 LIB libspdk_idxd.a 00:05:06.494 LIB libspdk_vmd.a 00:05:06.494 CC lib/env_dpdk/sigbus_handler.o 00:05:06.494 CC lib/env_dpdk/pci_dpdk.o 00:05:06.494 SO libspdk_idxd.so.12.1 00:05:06.494 SYMLINK libspdk_json.so 
00:05:06.494 SO libspdk_vmd.so.6.0 00:05:06.494 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:06.494 SYMLINK libspdk_idxd.so 00:05:06.494 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:06.494 SYMLINK libspdk_vmd.so 00:05:06.494 CC lib/rdma_provider/common.o 00:05:06.494 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:06.494 CC lib/jsonrpc/jsonrpc_server.o 00:05:06.495 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:06.495 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:06.495 CC lib/jsonrpc/jsonrpc_client.o 00:05:06.495 LIB libspdk_rdma_provider.a 00:05:06.495 SO libspdk_rdma_provider.so.7.0 00:05:06.495 SYMLINK libspdk_rdma_provider.so 00:05:06.495 LIB libspdk_jsonrpc.a 00:05:06.495 SO libspdk_jsonrpc.so.6.0 00:05:06.495 SYMLINK libspdk_jsonrpc.so 00:05:06.495 LIB libspdk_env_dpdk.a 00:05:06.495 CC lib/rpc/rpc.o 00:05:06.495 SO libspdk_env_dpdk.so.15.1 00:05:06.495 SYMLINK libspdk_env_dpdk.so 00:05:06.495 LIB libspdk_rpc.a 00:05:06.495 SO libspdk_rpc.so.6.0 00:05:06.495 SYMLINK libspdk_rpc.so 00:05:06.495 CC lib/notify/notify.o 00:05:06.495 CC lib/trace/trace.o 00:05:06.495 CC lib/trace/trace_flags.o 00:05:06.495 CC lib/notify/notify_rpc.o 00:05:06.495 CC lib/trace/trace_rpc.o 00:05:06.495 CC lib/keyring/keyring.o 00:05:06.495 CC lib/keyring/keyring_rpc.o 00:05:06.495 LIB libspdk_notify.a 00:05:06.495 SO libspdk_notify.so.6.0 00:05:06.495 LIB libspdk_trace.a 00:05:06.495 LIB libspdk_keyring.a 00:05:06.495 SYMLINK libspdk_notify.so 00:05:06.495 SO libspdk_keyring.so.2.0 00:05:06.495 SO libspdk_trace.so.11.0 00:05:06.495 SYMLINK libspdk_keyring.so 00:05:06.495 SYMLINK libspdk_trace.so 00:05:06.495 CC lib/thread/thread.o 00:05:06.495 CC lib/thread/iobuf.o 00:05:06.495 CC lib/sock/sock.o 00:05:06.495 CC lib/sock/sock_rpc.o 00:05:06.495 LIB libspdk_sock.a 00:05:06.495 SO libspdk_sock.so.10.0 00:05:06.495 SYMLINK libspdk_sock.so 00:05:06.495 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:06.495 CC lib/nvme/nvme_ctrlr.o 00:05:06.495 CC lib/nvme/nvme_fabric.o 00:05:06.495 CC lib/nvme/nvme_ns_cmd.o 00:05:06.495 CC lib/nvme/nvme_pcie_common.o 00:05:06.495 CC lib/nvme/nvme_ns.o 00:05:06.495 CC lib/nvme/nvme_pcie.o 00:05:06.495 CC lib/nvme/nvme.o 00:05:06.495 CC lib/nvme/nvme_qpair.o 00:05:06.495 LIB libspdk_thread.a 00:05:06.495 SO libspdk_thread.so.11.0 00:05:06.495 CC lib/nvme/nvme_quirks.o 00:05:06.495 SYMLINK libspdk_thread.so 00:05:06.495 CC lib/nvme/nvme_transport.o 00:05:06.495 CC lib/nvme/nvme_discovery.o 00:05:06.495 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:06.495 CC lib/accel/accel.o 00:05:06.495 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:06.495 CC lib/nvme/nvme_tcp.o 00:05:06.495 CC lib/nvme/nvme_opal.o 00:05:06.495 CC lib/accel/accel_rpc.o 00:05:06.495 CC lib/nvme/nvme_io_msg.o 00:05:06.495 CC lib/nvme/nvme_poll_group.o 00:05:06.495 CC lib/blob/blobstore.o 00:05:06.495 CC lib/init/json_config.o 00:05:06.495 CC lib/blob/request.o 00:05:06.495 CC lib/blob/zeroes.o 00:05:06.495 CC lib/blob/blob_bs_dev.o 00:05:06.495 CC lib/accel/accel_sw.o 00:05:06.495 CC lib/init/subsystem.o 00:05:06.763 CC lib/nvme/nvme_zns.o 00:05:06.763 CC lib/init/subsystem_rpc.o 00:05:06.763 CC lib/virtio/virtio.o 00:05:06.763 CC lib/virtio/virtio_vhost_user.o 00:05:07.021 LIB libspdk_accel.a 00:05:07.021 CC lib/init/rpc.o 00:05:07.021 SO libspdk_accel.so.16.0 00:05:07.021 CC lib/nvme/nvme_stubs.o 00:05:07.021 CC lib/virtio/virtio_vfio_user.o 00:05:07.021 SYMLINK libspdk_accel.so 00:05:07.021 CC lib/virtio/virtio_pci.o 00:05:07.021 CC lib/nvme/nvme_auth.o 00:05:07.021 CC lib/nvme/nvme_cuse.o 00:05:07.021 LIB libspdk_init.a 00:05:07.279 CC 
lib/nvme/nvme_vfio_user.o 00:05:07.279 SO libspdk_init.so.6.0 00:05:07.279 CC lib/nvme/nvme_rdma.o 00:05:07.279 SYMLINK libspdk_init.so 00:05:07.279 LIB libspdk_virtio.a 00:05:07.280 SO libspdk_virtio.so.7.0 00:05:07.538 SYMLINK libspdk_virtio.so 00:05:07.538 CC lib/vfu_tgt/tgt_endpoint.o 00:05:07.538 CC lib/fsdev/fsdev.o 00:05:07.538 CC lib/bdev/bdev.o 00:05:07.538 CC lib/fsdev/fsdev_io.o 00:05:07.538 CC lib/event/app.o 00:05:07.796 CC lib/vfu_tgt/tgt_rpc.o 00:05:07.796 CC lib/bdev/bdev_rpc.o 00:05:08.054 CC lib/fsdev/fsdev_rpc.o 00:05:08.054 LIB libspdk_vfu_tgt.a 00:05:08.054 SO libspdk_vfu_tgt.so.3.0 00:05:08.054 CC lib/event/reactor.o 00:05:08.054 CC lib/bdev/bdev_zone.o 00:05:08.054 SYMLINK libspdk_vfu_tgt.so 00:05:08.054 CC lib/bdev/part.o 00:05:08.054 CC lib/bdev/scsi_nvme.o 00:05:08.054 CC lib/event/log_rpc.o 00:05:08.312 CC lib/event/app_rpc.o 00:05:08.312 LIB libspdk_fsdev.a 00:05:08.312 SO libspdk_fsdev.so.2.0 00:05:08.312 CC lib/event/scheduler_static.o 00:05:08.312 SYMLINK libspdk_fsdev.so 00:05:08.571 LIB libspdk_event.a 00:05:08.571 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:08.571 SO libspdk_event.so.14.0 00:05:08.571 SYMLINK libspdk_event.so 00:05:08.571 LIB libspdk_nvme.a 00:05:08.829 SO libspdk_nvme.so.15.0 00:05:09.088 SYMLINK libspdk_nvme.so 00:05:09.347 LIB libspdk_fuse_dispatcher.a 00:05:09.347 SO libspdk_fuse_dispatcher.so.1.0 00:05:09.347 SYMLINK libspdk_fuse_dispatcher.so 00:05:09.347 LIB libspdk_blob.a 00:05:09.605 SO libspdk_blob.so.11.0 00:05:09.605 SYMLINK libspdk_blob.so 00:05:09.864 CC lib/blobfs/blobfs.o 00:05:09.864 CC lib/blobfs/tree.o 00:05:09.864 CC lib/lvol/lvol.o 00:05:10.431 LIB libspdk_bdev.a 00:05:10.431 SO libspdk_bdev.so.17.0 00:05:10.431 SYMLINK libspdk_bdev.so 00:05:10.690 LIB libspdk_blobfs.a 00:05:10.690 CC lib/ftl/ftl_core.o 00:05:10.690 CC lib/ftl/ftl_init.o 00:05:10.690 CC lib/scsi/dev.o 00:05:10.690 CC lib/ftl/ftl_debug.o 00:05:10.690 CC lib/ftl/ftl_layout.o 00:05:10.690 CC lib/nvmf/ctrlr.o 00:05:10.690 CC lib/ublk/ublk.o 00:05:10.690 CC lib/nbd/nbd.o 00:05:10.690 SO libspdk_blobfs.so.10.0 00:05:10.948 SYMLINK libspdk_blobfs.so 00:05:10.948 CC lib/ublk/ublk_rpc.o 00:05:10.948 LIB libspdk_lvol.a 00:05:10.948 SO libspdk_lvol.so.10.0 00:05:10.948 CC lib/ftl/ftl_io.o 00:05:10.948 CC lib/scsi/lun.o 00:05:10.948 SYMLINK libspdk_lvol.so 00:05:10.948 CC lib/nvmf/ctrlr_discovery.o 00:05:10.948 CC lib/nbd/nbd_rpc.o 00:05:11.207 CC lib/scsi/port.o 00:05:11.207 CC lib/ftl/ftl_sb.o 00:05:11.207 CC lib/ftl/ftl_l2p.o 00:05:11.207 CC lib/nvmf/ctrlr_bdev.o 00:05:11.207 LIB libspdk_nbd.a 00:05:11.207 SO libspdk_nbd.so.7.0 00:05:11.207 CC lib/ftl/ftl_l2p_flat.o 00:05:11.207 CC lib/ftl/ftl_nv_cache.o 00:05:11.207 SYMLINK libspdk_nbd.so 00:05:11.207 CC lib/ftl/ftl_band.o 00:05:11.207 CC lib/scsi/scsi.o 00:05:11.465 CC lib/scsi/scsi_bdev.o 00:05:11.465 CC lib/scsi/scsi_pr.o 00:05:11.465 LIB libspdk_ublk.a 00:05:11.465 SO libspdk_ublk.so.3.0 00:05:11.465 CC lib/scsi/scsi_rpc.o 00:05:11.465 CC lib/scsi/task.o 00:05:11.465 CC lib/ftl/ftl_band_ops.o 00:05:11.465 SYMLINK libspdk_ublk.so 00:05:11.465 CC lib/nvmf/subsystem.o 00:05:11.724 CC lib/nvmf/nvmf.o 00:05:11.724 CC lib/ftl/ftl_writer.o 00:05:11.724 CC lib/ftl/ftl_rq.o 00:05:11.724 CC lib/nvmf/nvmf_rpc.o 00:05:11.982 LIB libspdk_scsi.a 00:05:11.982 CC lib/nvmf/transport.o 00:05:11.982 SO libspdk_scsi.so.9.0 00:05:11.982 CC lib/ftl/ftl_reloc.o 00:05:11.982 CC lib/nvmf/tcp.o 00:05:11.982 CC lib/nvmf/stubs.o 00:05:11.982 SYMLINK libspdk_scsi.so 00:05:11.982 CC lib/ftl/ftl_l2p_cache.o 00:05:12.240 CC 
lib/nvmf/mdns_server.o 00:05:12.499 CC lib/iscsi/conn.o 00:05:12.499 CC lib/iscsi/init_grp.o 00:05:12.499 CC lib/iscsi/iscsi.o 00:05:12.499 CC lib/nvmf/vfio_user.o 00:05:12.499 CC lib/ftl/ftl_p2l.o 00:05:12.757 CC lib/ftl/ftl_p2l_log.o 00:05:12.757 CC lib/iscsi/param.o 00:05:12.757 CC lib/iscsi/portal_grp.o 00:05:12.757 CC lib/nvmf/rdma.o 00:05:13.015 CC lib/nvmf/auth.o 00:05:13.015 CC lib/ftl/mngt/ftl_mngt.o 00:05:13.015 CC lib/vhost/vhost.o 00:05:13.015 CC lib/iscsi/tgt_node.o 00:05:13.015 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:13.015 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:13.272 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:13.272 CC lib/iscsi/iscsi_subsystem.o 00:05:13.272 CC lib/iscsi/iscsi_rpc.o 00:05:13.531 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:13.531 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:13.531 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:13.789 CC lib/iscsi/task.o 00:05:13.789 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:13.789 CC lib/vhost/vhost_rpc.o 00:05:13.789 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:13.789 CC lib/vhost/vhost_scsi.o 00:05:13.789 CC lib/vhost/vhost_blk.o 00:05:14.047 CC lib/vhost/rte_vhost_user.o 00:05:14.047 LIB libspdk_iscsi.a 00:05:14.047 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:14.047 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:14.047 SO libspdk_iscsi.so.8.0 00:05:14.305 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:14.305 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:14.305 CC lib/ftl/utils/ftl_conf.o 00:05:14.305 CC lib/ftl/utils/ftl_md.o 00:05:14.305 SYMLINK libspdk_iscsi.so 00:05:14.305 CC lib/ftl/utils/ftl_mempool.o 00:05:14.562 CC lib/ftl/utils/ftl_bitmap.o 00:05:14.562 CC lib/ftl/utils/ftl_property.o 00:05:14.562 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:14.562 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:14.562 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:14.562 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:14.821 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:14.821 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:14.821 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:14.821 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:14.821 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:14.821 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:14.821 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:14.821 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:15.080 LIB libspdk_nvmf.a 00:05:15.080 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:15.080 CC lib/ftl/base/ftl_base_dev.o 00:05:15.080 CC lib/ftl/base/ftl_base_bdev.o 00:05:15.080 CC lib/ftl/ftl_trace.o 00:05:15.080 SO libspdk_nvmf.so.20.0 00:05:15.080 LIB libspdk_vhost.a 00:05:15.338 SO libspdk_vhost.so.8.0 00:05:15.338 SYMLINK libspdk_vhost.so 00:05:15.338 SYMLINK libspdk_nvmf.so 00:05:15.338 LIB libspdk_ftl.a 00:05:15.597 SO libspdk_ftl.so.9.0 00:05:15.855 SYMLINK libspdk_ftl.so 00:05:16.114 CC module/vfu_device/vfu_virtio.o 00:05:16.114 CC module/env_dpdk/env_dpdk_rpc.o 00:05:16.372 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:16.372 CC module/fsdev/aio/fsdev_aio.o 00:05:16.372 CC module/blob/bdev/blob_bdev.o 00:05:16.372 CC module/accel/dsa/accel_dsa.o 00:05:16.372 CC module/accel/error/accel_error.o 00:05:16.372 CC module/accel/ioat/accel_ioat.o 00:05:16.372 CC module/sock/posix/posix.o 00:05:16.372 CC module/keyring/file/keyring.o 00:05:16.372 LIB libspdk_env_dpdk_rpc.a 00:05:16.372 SO libspdk_env_dpdk_rpc.so.6.0 00:05:16.372 SYMLINK libspdk_env_dpdk_rpc.so 00:05:16.372 CC module/accel/dsa/accel_dsa_rpc.o 00:05:16.372 CC module/keyring/file/keyring_rpc.o 00:05:16.372 CC module/accel/ioat/accel_ioat_rpc.o 00:05:16.372 LIB libspdk_scheduler_dynamic.a 00:05:16.372 CC 
module/accel/error/accel_error_rpc.o 00:05:16.630 SO libspdk_scheduler_dynamic.so.4.0 00:05:16.630 LIB libspdk_blob_bdev.a 00:05:16.630 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:16.630 SYMLINK libspdk_scheduler_dynamic.so 00:05:16.630 SO libspdk_blob_bdev.so.11.0 00:05:16.630 LIB libspdk_accel_dsa.a 00:05:16.630 LIB libspdk_keyring_file.a 00:05:16.630 SO libspdk_accel_dsa.so.5.0 00:05:16.630 LIB libspdk_accel_error.a 00:05:16.630 SYMLINK libspdk_blob_bdev.so 00:05:16.630 SO libspdk_keyring_file.so.2.0 00:05:16.630 LIB libspdk_accel_ioat.a 00:05:16.630 CC module/fsdev/aio/linux_aio_mgr.o 00:05:16.630 SO libspdk_accel_error.so.2.0 00:05:16.630 SO libspdk_accel_ioat.so.6.0 00:05:16.630 SYMLINK libspdk_accel_dsa.so 00:05:16.630 SYMLINK libspdk_keyring_file.so 00:05:16.889 SYMLINK libspdk_accel_error.so 00:05:16.889 CC module/vfu_device/vfu_virtio_blk.o 00:05:16.889 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:16.889 SYMLINK libspdk_accel_ioat.so 00:05:16.889 CC module/vfu_device/vfu_virtio_scsi.o 00:05:16.889 CC module/vfu_device/vfu_virtio_rpc.o 00:05:16.889 CC module/keyring/linux/keyring.o 00:05:16.889 CC module/accel/iaa/accel_iaa.o 00:05:16.889 CC module/sock/uring/uring.o 00:05:16.889 LIB libspdk_fsdev_aio.a 00:05:16.889 LIB libspdk_scheduler_dpdk_governor.a 00:05:16.889 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:16.889 SO libspdk_fsdev_aio.so.1.0 00:05:17.148 LIB libspdk_sock_posix.a 00:05:17.148 CC module/accel/iaa/accel_iaa_rpc.o 00:05:17.148 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:17.148 CC module/scheduler/gscheduler/gscheduler.o 00:05:17.148 SO libspdk_sock_posix.so.6.0 00:05:17.148 SYMLINK libspdk_fsdev_aio.so 00:05:17.148 CC module/keyring/linux/keyring_rpc.o 00:05:17.148 CC module/vfu_device/vfu_virtio_fs.o 00:05:17.148 SYMLINK libspdk_sock_posix.so 00:05:17.148 LIB libspdk_accel_iaa.a 00:05:17.148 LIB libspdk_scheduler_gscheduler.a 00:05:17.148 SO libspdk_accel_iaa.so.3.0 00:05:17.148 LIB libspdk_keyring_linux.a 00:05:17.148 SO libspdk_scheduler_gscheduler.so.4.0 00:05:17.406 SO libspdk_keyring_linux.so.1.0 00:05:17.406 SYMLINK libspdk_accel_iaa.so 00:05:17.406 SYMLINK libspdk_scheduler_gscheduler.so 00:05:17.406 CC module/bdev/delay/vbdev_delay.o 00:05:17.406 CC module/blobfs/bdev/blobfs_bdev.o 00:05:17.406 SYMLINK libspdk_keyring_linux.so 00:05:17.406 CC module/bdev/gpt/gpt.o 00:05:17.406 CC module/bdev/error/vbdev_error.o 00:05:17.406 CC module/bdev/error/vbdev_error_rpc.o 00:05:17.406 CC module/bdev/lvol/vbdev_lvol.o 00:05:17.406 LIB libspdk_vfu_device.a 00:05:17.406 SO libspdk_vfu_device.so.3.0 00:05:17.406 CC module/bdev/malloc/bdev_malloc.o 00:05:17.406 CC module/bdev/null/bdev_null.o 00:05:17.664 SYMLINK libspdk_vfu_device.so 00:05:17.664 CC module/bdev/null/bdev_null_rpc.o 00:05:17.664 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:17.664 CC module/bdev/gpt/vbdev_gpt.o 00:05:17.664 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:17.664 LIB libspdk_bdev_error.a 00:05:17.664 SO libspdk_bdev_error.so.6.0 00:05:17.664 LIB libspdk_sock_uring.a 00:05:17.664 SO libspdk_sock_uring.so.5.0 00:05:17.664 SYMLINK libspdk_bdev_error.so 00:05:17.664 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:17.664 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:17.664 LIB libspdk_blobfs_bdev.a 00:05:17.664 LIB libspdk_bdev_null.a 00:05:17.921 SYMLINK libspdk_sock_uring.so 00:05:17.921 SO libspdk_blobfs_bdev.so.6.0 00:05:17.921 SO libspdk_bdev_null.so.6.0 00:05:17.921 LIB libspdk_bdev_gpt.a 00:05:17.921 SYMLINK libspdk_blobfs_bdev.so 00:05:17.921 SO libspdk_bdev_gpt.so.6.0 
00:05:17.921 SYMLINK libspdk_bdev_null.so 00:05:17.921 LIB libspdk_bdev_malloc.a 00:05:17.921 CC module/bdev/nvme/bdev_nvme.o 00:05:17.921 LIB libspdk_bdev_lvol.a 00:05:17.921 LIB libspdk_bdev_delay.a 00:05:17.921 SYMLINK libspdk_bdev_gpt.so 00:05:17.921 SO libspdk_bdev_malloc.so.6.0 00:05:17.921 SO libspdk_bdev_delay.so.6.0 00:05:17.921 CC module/bdev/passthru/vbdev_passthru.o 00:05:17.922 SO libspdk_bdev_lvol.so.6.0 00:05:17.922 CC module/bdev/raid/bdev_raid.o 00:05:18.179 SYMLINK libspdk_bdev_lvol.so 00:05:18.179 SYMLINK libspdk_bdev_delay.so 00:05:18.179 CC module/bdev/raid/bdev_raid_rpc.o 00:05:18.179 SYMLINK libspdk_bdev_malloc.so 00:05:18.179 CC module/bdev/split/vbdev_split.o 00:05:18.179 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:18.179 CC module/bdev/uring/bdev_uring.o 00:05:18.179 CC module/bdev/aio/bdev_aio.o 00:05:18.179 CC module/bdev/iscsi/bdev_iscsi.o 00:05:18.436 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:18.436 CC module/bdev/ftl/bdev_ftl.o 00:05:18.436 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:18.436 CC module/bdev/split/vbdev_split_rpc.o 00:05:18.436 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:18.436 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:18.436 CC module/bdev/aio/bdev_aio_rpc.o 00:05:18.436 LIB libspdk_bdev_split.a 00:05:18.436 CC module/bdev/uring/bdev_uring_rpc.o 00:05:18.694 SO libspdk_bdev_split.so.6.0 00:05:18.694 LIB libspdk_bdev_passthru.a 00:05:18.694 SO libspdk_bdev_passthru.so.6.0 00:05:18.694 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:18.694 SYMLINK libspdk_bdev_split.so 00:05:18.694 LIB libspdk_bdev_zone_block.a 00:05:18.694 LIB libspdk_bdev_iscsi.a 00:05:18.694 SO libspdk_bdev_zone_block.so.6.0 00:05:18.694 SYMLINK libspdk_bdev_passthru.so 00:05:18.694 LIB libspdk_bdev_aio.a 00:05:18.694 CC module/bdev/raid/bdev_raid_sb.o 00:05:18.694 SO libspdk_bdev_iscsi.so.6.0 00:05:18.694 LIB libspdk_bdev_uring.a 00:05:18.694 SO libspdk_bdev_aio.so.6.0 00:05:18.694 LIB libspdk_bdev_ftl.a 00:05:18.694 SYMLINK libspdk_bdev_zone_block.so 00:05:18.694 SYMLINK libspdk_bdev_iscsi.so 00:05:18.694 CC module/bdev/nvme/nvme_rpc.o 00:05:18.694 SO libspdk_bdev_uring.so.6.0 00:05:18.694 CC module/bdev/nvme/bdev_mdns_client.o 00:05:18.694 SO libspdk_bdev_ftl.so.6.0 00:05:18.694 SYMLINK libspdk_bdev_aio.so 00:05:18.694 CC module/bdev/nvme/vbdev_opal.o 00:05:18.952 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:18.952 SYMLINK libspdk_bdev_uring.so 00:05:18.952 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:18.952 SYMLINK libspdk_bdev_ftl.so 00:05:18.952 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:18.952 CC module/bdev/raid/raid0.o 00:05:18.952 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:18.952 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:19.210 CC module/bdev/raid/raid1.o 00:05:19.210 CC module/bdev/raid/concat.o 00:05:19.469 LIB libspdk_bdev_virtio.a 00:05:19.469 LIB libspdk_bdev_raid.a 00:05:19.469 SO libspdk_bdev_virtio.so.6.0 00:05:19.469 SO libspdk_bdev_raid.so.6.0 00:05:19.469 SYMLINK libspdk_bdev_virtio.so 00:05:19.469 SYMLINK libspdk_bdev_raid.so 00:05:20.845 LIB libspdk_bdev_nvme.a 00:05:20.845 SO libspdk_bdev_nvme.so.7.1 00:05:20.845 SYMLINK libspdk_bdev_nvme.so 00:05:21.411 CC module/event/subsystems/vmd/vmd.o 00:05:21.411 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:21.411 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:21.411 CC module/event/subsystems/keyring/keyring.o 00:05:21.411 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:21.411 CC module/event/subsystems/fsdev/fsdev.o 00:05:21.411 CC module/event/subsystems/sock/sock.o 
00:05:21.411 CC module/event/subsystems/iobuf/iobuf.o 00:05:21.411 CC module/event/subsystems/scheduler/scheduler.o 00:05:21.411 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:21.411 LIB libspdk_event_vfu_tgt.a 00:05:21.411 LIB libspdk_event_keyring.a 00:05:21.411 LIB libspdk_event_vhost_blk.a 00:05:21.411 LIB libspdk_event_vmd.a 00:05:21.411 SO libspdk_event_vfu_tgt.so.3.0 00:05:21.411 LIB libspdk_event_scheduler.a 00:05:21.411 SO libspdk_event_keyring.so.1.0 00:05:21.411 SO libspdk_event_vhost_blk.so.3.0 00:05:21.411 LIB libspdk_event_iobuf.a 00:05:21.411 LIB libspdk_event_fsdev.a 00:05:21.670 SO libspdk_event_vmd.so.6.0 00:05:21.670 SO libspdk_event_scheduler.so.4.0 00:05:21.670 SO libspdk_event_fsdev.so.1.0 00:05:21.670 LIB libspdk_event_sock.a 00:05:21.670 SO libspdk_event_iobuf.so.3.0 00:05:21.670 SYMLINK libspdk_event_vfu_tgt.so 00:05:21.670 SYMLINK libspdk_event_keyring.so 00:05:21.670 SYMLINK libspdk_event_vhost_blk.so 00:05:21.670 SO libspdk_event_sock.so.5.0 00:05:21.670 SYMLINK libspdk_event_vmd.so 00:05:21.670 SYMLINK libspdk_event_iobuf.so 00:05:21.670 SYMLINK libspdk_event_scheduler.so 00:05:21.670 SYMLINK libspdk_event_fsdev.so 00:05:21.670 SYMLINK libspdk_event_sock.so 00:05:21.928 CC module/event/subsystems/accel/accel.o 00:05:21.928 LIB libspdk_event_accel.a 00:05:21.928 SO libspdk_event_accel.so.6.0 00:05:22.187 SYMLINK libspdk_event_accel.so 00:05:22.447 CC module/event/subsystems/bdev/bdev.o 00:05:22.711 LIB libspdk_event_bdev.a 00:05:22.711 SO libspdk_event_bdev.so.6.0 00:05:22.712 SYMLINK libspdk_event_bdev.so 00:05:22.971 CC module/event/subsystems/scsi/scsi.o 00:05:22.971 CC module/event/subsystems/ublk/ublk.o 00:05:22.971 CC module/event/subsystems/nbd/nbd.o 00:05:22.971 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:22.971 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:22.971 LIB libspdk_event_nbd.a 00:05:22.971 LIB libspdk_event_ublk.a 00:05:23.230 LIB libspdk_event_scsi.a 00:05:23.230 SO libspdk_event_nbd.so.6.0 00:05:23.230 SO libspdk_event_ublk.so.3.0 00:05:23.230 SO libspdk_event_scsi.so.6.0 00:05:23.230 SYMLINK libspdk_event_nbd.so 00:05:23.230 SYMLINK libspdk_event_ublk.so 00:05:23.230 SYMLINK libspdk_event_scsi.so 00:05:23.230 LIB libspdk_event_nvmf.a 00:05:23.230 SO libspdk_event_nvmf.so.6.0 00:05:23.230 SYMLINK libspdk_event_nvmf.so 00:05:23.488 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:23.488 CC module/event/subsystems/iscsi/iscsi.o 00:05:23.488 LIB libspdk_event_vhost_scsi.a 00:05:23.746 LIB libspdk_event_iscsi.a 00:05:23.746 SO libspdk_event_vhost_scsi.so.3.0 00:05:23.747 SO libspdk_event_iscsi.so.6.0 00:05:23.747 SYMLINK libspdk_event_vhost_scsi.so 00:05:23.747 SYMLINK libspdk_event_iscsi.so 00:05:24.005 SO libspdk.so.6.0 00:05:24.005 SYMLINK libspdk.so 00:05:24.005 CXX app/trace/trace.o 00:05:24.005 CC app/trace_record/trace_record.o 00:05:24.263 CC app/spdk_lspci/spdk_lspci.o 00:05:24.263 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:24.263 CC app/iscsi_tgt/iscsi_tgt.o 00:05:24.263 CC app/nvmf_tgt/nvmf_main.o 00:05:24.263 CC examples/util/zipf/zipf.o 00:05:24.263 CC test/thread/poller_perf/poller_perf.o 00:05:24.263 CC app/spdk_tgt/spdk_tgt.o 00:05:24.263 CC examples/ioat/perf/perf.o 00:05:24.263 LINK spdk_lspci 00:05:24.522 LINK interrupt_tgt 00:05:24.522 LINK zipf 00:05:24.522 LINK poller_perf 00:05:24.522 LINK nvmf_tgt 00:05:24.522 LINK spdk_trace_record 00:05:24.522 LINK iscsi_tgt 00:05:24.522 LINK spdk_tgt 00:05:24.522 LINK ioat_perf 00:05:24.522 CC app/spdk_nvme_perf/perf.o 00:05:24.522 LINK spdk_trace 00:05:24.781 
TEST_HEADER include/spdk/accel.h 00:05:24.781 TEST_HEADER include/spdk/accel_module.h 00:05:24.781 TEST_HEADER include/spdk/assert.h 00:05:24.781 TEST_HEADER include/spdk/barrier.h 00:05:24.781 TEST_HEADER include/spdk/base64.h 00:05:24.781 TEST_HEADER include/spdk/bdev.h 00:05:24.781 TEST_HEADER include/spdk/bdev_module.h 00:05:24.781 TEST_HEADER include/spdk/bdev_zone.h 00:05:24.781 TEST_HEADER include/spdk/bit_array.h 00:05:24.781 TEST_HEADER include/spdk/bit_pool.h 00:05:24.781 TEST_HEADER include/spdk/blob_bdev.h 00:05:24.781 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:24.781 TEST_HEADER include/spdk/blobfs.h 00:05:24.781 TEST_HEADER include/spdk/blob.h 00:05:24.781 TEST_HEADER include/spdk/conf.h 00:05:24.781 CC examples/ioat/verify/verify.o 00:05:24.781 TEST_HEADER include/spdk/config.h 00:05:24.781 TEST_HEADER include/spdk/cpuset.h 00:05:24.781 TEST_HEADER include/spdk/crc16.h 00:05:24.781 TEST_HEADER include/spdk/crc32.h 00:05:24.781 TEST_HEADER include/spdk/crc64.h 00:05:24.781 TEST_HEADER include/spdk/dif.h 00:05:24.781 TEST_HEADER include/spdk/dma.h 00:05:24.781 TEST_HEADER include/spdk/endian.h 00:05:24.781 TEST_HEADER include/spdk/env_dpdk.h 00:05:24.781 TEST_HEADER include/spdk/env.h 00:05:24.781 TEST_HEADER include/spdk/event.h 00:05:24.781 TEST_HEADER include/spdk/fd_group.h 00:05:24.781 TEST_HEADER include/spdk/fd.h 00:05:24.781 TEST_HEADER include/spdk/file.h 00:05:24.781 TEST_HEADER include/spdk/fsdev.h 00:05:24.781 TEST_HEADER include/spdk/fsdev_module.h 00:05:24.781 TEST_HEADER include/spdk/ftl.h 00:05:24.781 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:24.781 TEST_HEADER include/spdk/gpt_spec.h 00:05:24.781 TEST_HEADER include/spdk/hexlify.h 00:05:24.781 TEST_HEADER include/spdk/histogram_data.h 00:05:24.781 TEST_HEADER include/spdk/idxd.h 00:05:24.781 TEST_HEADER include/spdk/idxd_spec.h 00:05:24.781 TEST_HEADER include/spdk/init.h 00:05:24.781 TEST_HEADER include/spdk/ioat.h 00:05:24.781 TEST_HEADER include/spdk/ioat_spec.h 00:05:24.781 TEST_HEADER include/spdk/iscsi_spec.h 00:05:24.781 TEST_HEADER include/spdk/json.h 00:05:24.781 TEST_HEADER include/spdk/jsonrpc.h 00:05:24.781 TEST_HEADER include/spdk/keyring.h 00:05:24.781 TEST_HEADER include/spdk/keyring_module.h 00:05:24.781 TEST_HEADER include/spdk/likely.h 00:05:24.781 TEST_HEADER include/spdk/log.h 00:05:24.781 TEST_HEADER include/spdk/lvol.h 00:05:24.781 TEST_HEADER include/spdk/md5.h 00:05:24.781 TEST_HEADER include/spdk/memory.h 00:05:24.781 TEST_HEADER include/spdk/mmio.h 00:05:24.781 TEST_HEADER include/spdk/nbd.h 00:05:24.781 TEST_HEADER include/spdk/net.h 00:05:24.781 CC test/env/vtophys/vtophys.o 00:05:24.781 TEST_HEADER include/spdk/notify.h 00:05:24.781 CC test/app/bdev_svc/bdev_svc.o 00:05:24.781 CC test/dma/test_dma/test_dma.o 00:05:24.781 TEST_HEADER include/spdk/nvme.h 00:05:24.781 TEST_HEADER include/spdk/nvme_intel.h 00:05:24.781 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:24.781 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:24.781 TEST_HEADER include/spdk/nvme_spec.h 00:05:24.781 TEST_HEADER include/spdk/nvme_zns.h 00:05:24.781 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:24.781 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:24.781 TEST_HEADER include/spdk/nvmf.h 00:05:24.781 TEST_HEADER include/spdk/nvmf_spec.h 00:05:24.781 CC app/spdk_nvme_identify/identify.o 00:05:24.781 TEST_HEADER include/spdk/nvmf_transport.h 00:05:24.781 TEST_HEADER include/spdk/opal.h 00:05:24.781 CC test/event/event_perf/event_perf.o 00:05:24.781 TEST_HEADER include/spdk/opal_spec.h 00:05:25.040 TEST_HEADER 
include/spdk/pci_ids.h 00:05:25.040 CC app/spdk_nvme_discover/discovery_aer.o 00:05:25.040 TEST_HEADER include/spdk/pipe.h 00:05:25.040 TEST_HEADER include/spdk/queue.h 00:05:25.040 TEST_HEADER include/spdk/reduce.h 00:05:25.040 TEST_HEADER include/spdk/rpc.h 00:05:25.040 TEST_HEADER include/spdk/scheduler.h 00:05:25.040 TEST_HEADER include/spdk/scsi.h 00:05:25.040 TEST_HEADER include/spdk/scsi_spec.h 00:05:25.040 TEST_HEADER include/spdk/sock.h 00:05:25.040 TEST_HEADER include/spdk/stdinc.h 00:05:25.040 TEST_HEADER include/spdk/string.h 00:05:25.040 TEST_HEADER include/spdk/thread.h 00:05:25.040 TEST_HEADER include/spdk/trace.h 00:05:25.040 TEST_HEADER include/spdk/trace_parser.h 00:05:25.040 TEST_HEADER include/spdk/tree.h 00:05:25.040 TEST_HEADER include/spdk/ublk.h 00:05:25.040 CC test/env/mem_callbacks/mem_callbacks.o 00:05:25.040 TEST_HEADER include/spdk/util.h 00:05:25.040 TEST_HEADER include/spdk/uuid.h 00:05:25.040 TEST_HEADER include/spdk/version.h 00:05:25.040 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:25.040 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:25.040 TEST_HEADER include/spdk/vhost.h 00:05:25.040 TEST_HEADER include/spdk/vmd.h 00:05:25.040 TEST_HEADER include/spdk/xor.h 00:05:25.040 TEST_HEADER include/spdk/zipf.h 00:05:25.040 CXX test/cpp_headers/accel.o 00:05:25.040 LINK verify 00:05:25.040 LINK vtophys 00:05:25.040 LINK event_perf 00:05:25.040 LINK bdev_svc 00:05:25.040 LINK spdk_nvme_discover 00:05:25.298 CXX test/cpp_headers/accel_module.o 00:05:25.298 CC app/spdk_top/spdk_top.o 00:05:25.298 CC test/event/reactor/reactor.o 00:05:25.298 CXX test/cpp_headers/assert.o 00:05:25.298 CC examples/thread/thread/thread_ex.o 00:05:25.556 LINK test_dma 00:05:25.556 CC app/vhost/vhost.o 00:05:25.556 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:25.556 LINK reactor 00:05:25.556 LINK spdk_nvme_perf 00:05:25.556 CXX test/cpp_headers/barrier.o 00:05:25.556 LINK mem_callbacks 00:05:25.814 LINK vhost 00:05:25.814 LINK thread 00:05:25.814 CXX test/cpp_headers/base64.o 00:05:25.814 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:25.814 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:25.814 LINK spdk_nvme_identify 00:05:25.814 CC test/event/reactor_perf/reactor_perf.o 00:05:25.814 CC test/event/app_repeat/app_repeat.o 00:05:25.814 CXX test/cpp_headers/bdev.o 00:05:26.073 LINK nvme_fuzz 00:05:26.073 LINK reactor_perf 00:05:26.073 LINK env_dpdk_post_init 00:05:26.073 LINK app_repeat 00:05:26.073 CC test/event/scheduler/scheduler.o 00:05:26.073 CC examples/sock/hello_world/hello_sock.o 00:05:26.073 CC test/app/histogram_perf/histogram_perf.o 00:05:26.073 CXX test/cpp_headers/bdev_module.o 00:05:26.331 LINK spdk_top 00:05:26.331 CC test/app/jsoncat/jsoncat.o 00:05:26.331 CC test/env/memory/memory_ut.o 00:05:26.331 LINK histogram_perf 00:05:26.331 CC test/env/pci/pci_ut.o 00:05:26.331 LINK scheduler 00:05:26.331 CC test/rpc_client/rpc_client_test.o 00:05:26.331 CXX test/cpp_headers/bdev_zone.o 00:05:26.331 LINK hello_sock 00:05:26.331 LINK jsoncat 00:05:26.590 CC app/spdk_dd/spdk_dd.o 00:05:26.590 LINK rpc_client_test 00:05:26.590 CXX test/cpp_headers/bit_array.o 00:05:26.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:26.590 CC test/app/stub/stub.o 00:05:26.590 CC app/fio/nvme/fio_plugin.o 00:05:26.590 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:26.849 CC examples/vmd/lsvmd/lsvmd.o 00:05:26.849 LINK pci_ut 00:05:26.849 CXX test/cpp_headers/bit_pool.o 00:05:26.849 LINK stub 00:05:26.849 LINK lsvmd 00:05:27.108 LINK spdk_dd 00:05:27.108 CXX 
test/cpp_headers/blob_bdev.o 00:05:27.108 CC app/fio/bdev/fio_plugin.o 00:05:27.108 LINK vhost_fuzz 00:05:27.108 CC examples/idxd/perf/perf.o 00:05:27.108 CC examples/vmd/led/led.o 00:05:27.367 CXX test/cpp_headers/blobfs_bdev.o 00:05:27.367 LINK spdk_nvme 00:05:27.367 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:27.367 CXX test/cpp_headers/blobfs.o 00:05:27.367 LINK led 00:05:27.367 CC examples/accel/perf/accel_perf.o 00:05:27.367 LINK iscsi_fuzz 00:05:27.626 CC examples/blob/hello_world/hello_blob.o 00:05:27.626 CXX test/cpp_headers/blob.o 00:05:27.626 LINK memory_ut 00:05:27.626 LINK idxd_perf 00:05:27.626 LINK hello_fsdev 00:05:27.626 LINK spdk_bdev 00:05:27.626 CC examples/nvme/hello_world/hello_world.o 00:05:27.626 CC examples/nvme/reconnect/reconnect.o 00:05:27.626 CXX test/cpp_headers/conf.o 00:05:27.885 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:27.885 LINK hello_blob 00:05:27.885 CC examples/blob/cli/blobcli.o 00:05:27.885 CC examples/nvme/arbitration/arbitration.o 00:05:27.885 CXX test/cpp_headers/config.o 00:05:27.885 LINK hello_world 00:05:27.885 CXX test/cpp_headers/cpuset.o 00:05:27.885 CC examples/nvme/hotplug/hotplug.o 00:05:27.885 CC test/accel/dif/dif.o 00:05:27.885 LINK accel_perf 00:05:28.143 LINK reconnect 00:05:28.143 CXX test/cpp_headers/crc16.o 00:05:28.143 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:28.143 LINK hotplug 00:05:28.143 CC test/blobfs/mkfs/mkfs.o 00:05:28.143 LINK arbitration 00:05:28.143 CC examples/nvme/abort/abort.o 00:05:28.401 LINK nvme_manage 00:05:28.401 CXX test/cpp_headers/crc32.o 00:05:28.401 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:28.401 LINK blobcli 00:05:28.401 LINK cmb_copy 00:05:28.401 LINK mkfs 00:05:28.401 CXX test/cpp_headers/crc64.o 00:05:28.401 LINK pmr_persistence 00:05:28.660 CXX test/cpp_headers/dif.o 00:05:28.660 CXX test/cpp_headers/dma.o 00:05:28.660 CC examples/bdev/hello_world/hello_bdev.o 00:05:28.660 CC examples/bdev/bdevperf/bdevperf.o 00:05:28.660 CXX test/cpp_headers/endian.o 00:05:28.660 LINK dif 00:05:28.660 LINK abort 00:05:28.660 CC test/lvol/esnap/esnap.o 00:05:28.660 CXX test/cpp_headers/env_dpdk.o 00:05:28.660 CXX test/cpp_headers/env.o 00:05:28.918 CXX test/cpp_headers/event.o 00:05:28.918 CC test/nvme/aer/aer.o 00:05:28.918 CC test/nvme/reset/reset.o 00:05:28.918 LINK hello_bdev 00:05:28.918 CC test/nvme/sgl/sgl.o 00:05:28.918 CC test/nvme/overhead/overhead.o 00:05:28.918 CC test/nvme/e2edp/nvme_dp.o 00:05:28.918 CXX test/cpp_headers/fd_group.o 00:05:29.176 LINK reset 00:05:29.176 CC test/nvme/err_injection/err_injection.o 00:05:29.176 LINK aer 00:05:29.176 CXX test/cpp_headers/fd.o 00:05:29.176 CC test/bdev/bdevio/bdevio.o 00:05:29.176 LINK sgl 00:05:29.176 LINK overhead 00:05:29.176 LINK nvme_dp 00:05:29.434 LINK err_injection 00:05:29.434 CXX test/cpp_headers/file.o 00:05:29.434 CC test/nvme/startup/startup.o 00:05:29.434 CC test/nvme/reserve/reserve.o 00:05:29.434 CC test/nvme/simple_copy/simple_copy.o 00:05:29.434 LINK bdevperf 00:05:29.434 CC test/nvme/connect_stress/connect_stress.o 00:05:29.434 CC test/nvme/boot_partition/boot_partition.o 00:05:29.434 CXX test/cpp_headers/fsdev.o 00:05:29.434 LINK startup 00:05:29.434 CC test/nvme/compliance/nvme_compliance.o 00:05:29.693 LINK bdevio 00:05:29.693 LINK reserve 00:05:29.693 LINK boot_partition 00:05:29.693 LINK connect_stress 00:05:29.693 CXX test/cpp_headers/fsdev_module.o 00:05:29.693 LINK simple_copy 00:05:29.693 CC test/nvme/fused_ordering/fused_ordering.o 00:05:29.952 CXX test/cpp_headers/ftl.o 00:05:29.952 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:05:29.952 CC test/nvme/fdp/fdp.o 00:05:29.952 CXX test/cpp_headers/fuse_dispatcher.o 00:05:29.952 LINK nvme_compliance 00:05:29.952 CC examples/nvmf/nvmf/nvmf.o 00:05:29.952 CXX test/cpp_headers/gpt_spec.o 00:05:29.952 CC test/nvme/cuse/cuse.o 00:05:29.952 LINK fused_ordering 00:05:29.952 CXX test/cpp_headers/hexlify.o 00:05:29.952 CXX test/cpp_headers/histogram_data.o 00:05:30.210 CXX test/cpp_headers/idxd.o 00:05:30.210 LINK doorbell_aers 00:05:30.210 CXX test/cpp_headers/idxd_spec.o 00:05:30.210 CXX test/cpp_headers/init.o 00:05:30.210 CXX test/cpp_headers/ioat.o 00:05:30.210 LINK nvmf 00:05:30.210 CXX test/cpp_headers/ioat_spec.o 00:05:30.210 CXX test/cpp_headers/iscsi_spec.o 00:05:30.210 CXX test/cpp_headers/json.o 00:05:30.210 CXX test/cpp_headers/jsonrpc.o 00:05:30.210 LINK fdp 00:05:30.469 CXX test/cpp_headers/keyring.o 00:05:30.469 CXX test/cpp_headers/keyring_module.o 00:05:30.469 CXX test/cpp_headers/likely.o 00:05:30.469 CXX test/cpp_headers/log.o 00:05:30.469 CXX test/cpp_headers/lvol.o 00:05:30.469 CXX test/cpp_headers/md5.o 00:05:30.469 CXX test/cpp_headers/memory.o 00:05:30.469 CXX test/cpp_headers/mmio.o 00:05:30.469 CXX test/cpp_headers/nbd.o 00:05:30.469 CXX test/cpp_headers/net.o 00:05:30.728 CXX test/cpp_headers/notify.o 00:05:30.728 CXX test/cpp_headers/nvme.o 00:05:30.728 CXX test/cpp_headers/nvme_intel.o 00:05:30.728 CXX test/cpp_headers/nvme_ocssd.o 00:05:30.728 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:30.728 CXX test/cpp_headers/nvme_spec.o 00:05:30.728 CXX test/cpp_headers/nvme_zns.o 00:05:30.728 CXX test/cpp_headers/nvmf_cmd.o 00:05:30.728 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:30.728 CXX test/cpp_headers/nvmf.o 00:05:30.728 CXX test/cpp_headers/nvmf_spec.o 00:05:30.728 CXX test/cpp_headers/nvmf_transport.o 00:05:30.728 CXX test/cpp_headers/opal.o 00:05:30.728 CXX test/cpp_headers/opal_spec.o 00:05:30.987 CXX test/cpp_headers/pci_ids.o 00:05:30.987 CXX test/cpp_headers/pipe.o 00:05:30.987 CXX test/cpp_headers/queue.o 00:05:30.987 CXX test/cpp_headers/reduce.o 00:05:30.987 CXX test/cpp_headers/rpc.o 00:05:30.987 CXX test/cpp_headers/scheduler.o 00:05:30.987 CXX test/cpp_headers/scsi.o 00:05:30.987 CXX test/cpp_headers/scsi_spec.o 00:05:30.987 CXX test/cpp_headers/sock.o 00:05:30.987 CXX test/cpp_headers/stdinc.o 00:05:31.246 CXX test/cpp_headers/string.o 00:05:31.246 CXX test/cpp_headers/thread.o 00:05:31.246 CXX test/cpp_headers/trace.o 00:05:31.246 CXX test/cpp_headers/trace_parser.o 00:05:31.246 CXX test/cpp_headers/tree.o 00:05:31.246 CXX test/cpp_headers/ublk.o 00:05:31.246 CXX test/cpp_headers/util.o 00:05:31.246 CXX test/cpp_headers/uuid.o 00:05:31.246 CXX test/cpp_headers/version.o 00:05:31.246 LINK cuse 00:05:31.246 CXX test/cpp_headers/vfio_user_pci.o 00:05:31.246 CXX test/cpp_headers/vfio_user_spec.o 00:05:31.505 CXX test/cpp_headers/vhost.o 00:05:31.505 CXX test/cpp_headers/vmd.o 00:05:31.505 CXX test/cpp_headers/xor.o 00:05:31.505 CXX test/cpp_headers/zipf.o 00:05:34.047 LINK esnap 00:05:34.306 00:05:34.306 real 1m27.337s 00:05:34.306 user 7m5.240s 00:05:34.306 sys 1m10.994s 00:05:34.306 ************************************ 00:05:34.306 END TEST make 00:05:34.306 ************************************ 00:05:34.306 16:00:40 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:34.306 16:00:40 make -- common/autotest_common.sh@10 -- $ set +x 00:05:34.565 16:00:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:34.565 16:00:41 -- pm/common@29 -- $ signal_monitor_resources TERM 
00:05:34.566 16:00:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:34.566 16:00:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.566 16:00:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:34.566 16:00:41 -- pm/common@44 -- $ pid=6040 00:05:34.566 16:00:41 -- pm/common@50 -- $ kill -TERM 6040 00:05:34.566 16:00:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.566 16:00:41 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:34.566 16:00:41 -- pm/common@44 -- $ pid=6041 00:05:34.566 16:00:41 -- pm/common@50 -- $ kill -TERM 6041 00:05:34.566 16:00:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:34.566 16:00:41 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:34.566 16:00:41 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:34.566 16:00:41 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:34.566 16:00:41 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:34.566 16:00:41 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:34.566 16:00:41 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.566 16:00:41 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.566 16:00:41 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.566 16:00:41 -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.566 16:00:41 -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.566 16:00:41 -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.566 16:00:41 -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.566 16:00:41 -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.566 16:00:41 -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.566 16:00:41 -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.566 16:00:41 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.566 16:00:41 -- scripts/common.sh@344 -- # case "$op" in 00:05:34.566 16:00:41 -- scripts/common.sh@345 -- # : 1 00:05:34.566 16:00:41 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.566 16:00:41 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.566 16:00:41 -- scripts/common.sh@365 -- # decimal 1 00:05:34.566 16:00:41 -- scripts/common.sh@353 -- # local d=1 00:05:34.566 16:00:41 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.566 16:00:41 -- scripts/common.sh@355 -- # echo 1 00:05:34.566 16:00:41 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.566 16:00:41 -- scripts/common.sh@366 -- # decimal 2 00:05:34.566 16:00:41 -- scripts/common.sh@353 -- # local d=2 00:05:34.566 16:00:41 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.566 16:00:41 -- scripts/common.sh@355 -- # echo 2 00:05:34.566 16:00:41 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.566 16:00:41 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.566 16:00:41 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.566 16:00:41 -- scripts/common.sh@368 -- # return 0 00:05:34.566 16:00:41 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.566 16:00:41 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.566 --rc genhtml_branch_coverage=1 00:05:34.566 --rc genhtml_function_coverage=1 00:05:34.566 --rc genhtml_legend=1 00:05:34.566 --rc geninfo_all_blocks=1 00:05:34.566 --rc geninfo_unexecuted_blocks=1 00:05:34.566 00:05:34.566 ' 00:05:34.566 16:00:41 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.566 --rc genhtml_branch_coverage=1 00:05:34.566 --rc genhtml_function_coverage=1 00:05:34.566 --rc genhtml_legend=1 00:05:34.566 --rc geninfo_all_blocks=1 00:05:34.566 --rc geninfo_unexecuted_blocks=1 00:05:34.566 00:05:34.566 ' 00:05:34.566 16:00:41 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.566 --rc genhtml_branch_coverage=1 00:05:34.566 --rc genhtml_function_coverage=1 00:05:34.566 --rc genhtml_legend=1 00:05:34.566 --rc geninfo_all_blocks=1 00:05:34.566 --rc geninfo_unexecuted_blocks=1 00:05:34.566 00:05:34.566 ' 00:05:34.566 16:00:41 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:34.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.566 --rc genhtml_branch_coverage=1 00:05:34.566 --rc genhtml_function_coverage=1 00:05:34.566 --rc genhtml_legend=1 00:05:34.566 --rc geninfo_all_blocks=1 00:05:34.566 --rc geninfo_unexecuted_blocks=1 00:05:34.566 00:05:34.566 ' 00:05:34.566 16:00:41 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:34.566 16:00:41 -- nvmf/common.sh@7 -- # uname -s 00:05:34.566 16:00:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:34.566 16:00:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:34.566 16:00:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:34.566 16:00:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:34.566 16:00:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:34.566 16:00:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:34.566 16:00:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:34.566 16:00:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:34.566 16:00:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:34.566 16:00:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:34.566 16:00:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:05:34.566 
16:00:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:05:34.566 16:00:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:34.566 16:00:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:34.566 16:00:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:34.566 16:00:41 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:34.566 16:00:41 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:34.566 16:00:41 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:34.566 16:00:41 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:34.566 16:00:41 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:34.566 16:00:41 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:34.566 16:00:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.566 16:00:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.566 16:00:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.566 16:00:41 -- paths/export.sh@5 -- # export PATH 00:05:34.566 16:00:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:34.566 16:00:41 -- nvmf/common.sh@51 -- # : 0 00:05:34.566 16:00:41 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:34.566 16:00:41 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:34.566 16:00:41 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:34.566 16:00:41 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:34.566 16:00:41 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:34.566 16:00:41 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:34.566 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:34.566 16:00:41 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:34.566 16:00:41 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:34.566 16:00:41 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:34.566 16:00:41 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:34.566 16:00:41 -- spdk/autotest.sh@32 -- # uname -s 00:05:34.826 16:00:41 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:34.826 16:00:41 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:34.826 16:00:41 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:34.826 16:00:41 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:34.826 16:00:41 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:34.826 16:00:41 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:34.826 16:00:41 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:34.826 16:00:41 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:34.826 16:00:41 -- spdk/autotest.sh@48 -- # udevadm_pid=67589 00:05:34.826 16:00:41 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:34.826 16:00:41 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:34.826 16:00:41 -- pm/common@17 -- # local monitor 00:05:34.826 16:00:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.826 16:00:41 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:34.826 16:00:41 -- pm/common@25 -- # sleep 1 00:05:34.826 16:00:41 -- pm/common@21 -- # date +%s 00:05:34.826 16:00:41 -- pm/common@21 -- # date +%s 00:05:34.826 16:00:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732032041 00:05:34.826 16:00:41 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732032041 00:05:34.826 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732032041_collect-cpu-load.pm.log 00:05:34.826 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732032041_collect-vmstat.pm.log 00:05:35.763 16:00:42 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:35.763 16:00:42 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:35.763 16:00:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:35.763 16:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:35.763 16:00:42 -- spdk/autotest.sh@59 -- # create_test_list 00:05:35.763 16:00:42 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:35.763 16:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:35.763 16:00:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:35.763 16:00:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:35.763 16:00:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:35.763 16:00:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:35.763 16:00:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:35.763 16:00:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:35.763 16:00:42 -- common/autotest_common.sh@1457 -- # uname 00:05:35.763 16:00:42 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:35.763 16:00:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:35.763 16:00:42 -- common/autotest_common.sh@1477 -- # uname 00:05:35.763 16:00:42 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:35.763 16:00:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:35.763 16:00:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:36.021 lcov: LCOV version 1.15 00:05:36.021 16:00:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:54.108 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:54.108 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:12.196 16:01:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:12.196 16:01:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:12.196 16:01:16 -- common/autotest_common.sh@10 -- # set +x 00:06:12.196 16:01:16 -- spdk/autotest.sh@78 -- # rm -f 00:06:12.196 16:01:16 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:12.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:12.196 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:12.196 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:12.196 16:01:17 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:12.196 16:01:17 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:12.196 16:01:17 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:12.196 16:01:17 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:12.196 16:01:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.196 16:01:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:12.196 16:01:17 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:12.196 16:01:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.196 16:01:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:06:12.196 16:01:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:12.196 16:01:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.196 16:01:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:06:12.196 16:01:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:06:12.196 16:01:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:12.196 16:01:17 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:06:12.196 16:01:17 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:06:12.196 16:01:17 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:12.196 16:01:17 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:12.196 16:01:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:12.196 16:01:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.196 16:01:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.196 16:01:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:12.196 16:01:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:12.196 16:01:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:12.196 No valid GPT data, bailing 
00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # pt= 00:06:12.196 16:01:17 -- scripts/common.sh@395 -- # return 1 00:06:12.196 16:01:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:12.196 1+0 records in 00:06:12.196 1+0 records out 00:06:12.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00526555 s, 199 MB/s 00:06:12.196 16:01:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.196 16:01:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.196 16:01:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:12.196 16:01:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:12.196 16:01:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:12.196 No valid GPT data, bailing 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # pt= 00:06:12.196 16:01:17 -- scripts/common.sh@395 -- # return 1 00:06:12.196 16:01:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:12.196 1+0 records in 00:06:12.196 1+0 records out 00:06:12.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473251 s, 222 MB/s 00:06:12.196 16:01:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.196 16:01:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.196 16:01:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:12.196 16:01:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:12.196 16:01:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:12.196 No valid GPT data, bailing 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # pt= 00:06:12.196 16:01:17 -- scripts/common.sh@395 -- # return 1 00:06:12.196 16:01:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:12.196 1+0 records in 00:06:12.196 1+0 records out 00:06:12.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0047434 s, 221 MB/s 00:06:12.196 16:01:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:12.196 16:01:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:12.196 16:01:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:12.196 16:01:17 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:12.196 16:01:17 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:12.196 No valid GPT data, bailing 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:12.196 16:01:17 -- scripts/common.sh@394 -- # pt= 00:06:12.196 16:01:17 -- scripts/common.sh@395 -- # return 1 00:06:12.196 16:01:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:12.196 1+0 records in 00:06:12.196 1+0 records out 00:06:12.196 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00375777 s, 279 MB/s 00:06:12.196 16:01:17 -- spdk/autotest.sh@105 -- # sync 00:06:12.196 16:01:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:12.196 16:01:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:12.196 16:01:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:13.131 16:01:19 -- spdk/autotest.sh@111 -- # uname -s 00:06:13.131 16:01:19 -- spdk/autotest.sh@111 -- # [[ Linux == 
Linux ]] 00:06:13.131 16:01:19 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:13.131 16:01:19 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:13.699 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.699 Hugepages 00:06:13.699 node hugesize free / total 00:06:13.699 node0 1048576kB 0 / 0 00:06:13.699 node0 2048kB 0 / 0 00:06:13.699 00:06:13.699 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:13.699 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:13.957 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:13.957 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:13.957 16:01:20 -- spdk/autotest.sh@117 -- # uname -s 00:06:13.957 16:01:20 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:13.957 16:01:20 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:13.957 16:01:20 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:14.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:14.866 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.866 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:14.866 16:01:21 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:15.802 16:01:22 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:15.802 16:01:22 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:15.802 16:01:22 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:15.802 16:01:22 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:15.802 16:01:22 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:15.802 16:01:22 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:15.802 16:01:22 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:15.802 16:01:22 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:15.802 16:01:22 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:15.802 16:01:22 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:15.802 16:01:22 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:15.802 16:01:22 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:16.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.369 Waiting for block devices as requested 00:06:16.369 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:16.369 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:16.369 16:01:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:16.369 16:01:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:16.369 16:01:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:06:16.369 16:01:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:16.369 16:01:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1492 -- # printf 
'%s\n' nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:06:16.369 16:01:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:16.369 16:01:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:16.369 16:01:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:16.369 16:01:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:16.369 16:01:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:16.369 16:01:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:06:16.369 16:01:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:16.369 16:01:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:16.369 16:01:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:16.369 16:01:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:16.369 16:01:23 -- common/autotest_common.sh@1543 -- # continue 00:06:16.370 16:01:23 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:16.370 16:01:23 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:16.370 16:01:23 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:16.370 16:01:23 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:06:16.370 16:01:23 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:16.370 16:01:23 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:16.370 16:01:23 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:16.370 16:01:23 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:16.370 16:01:23 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:16.370 16:01:23 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:16.628 16:01:23 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:16.628 16:01:23 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:16.628 16:01:23 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:16.628 16:01:23 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:06:16.628 16:01:23 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:16.628 16:01:23 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:16.628 16:01:23 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:16.628 16:01:23 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:16.628 16:01:23 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:16.628 16:01:23 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:16.628 16:01:23 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:16.628 16:01:23 -- common/autotest_common.sh@1543 -- # continue 00:06:16.628 16:01:23 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:16.628 16:01:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.628 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.628 16:01:23 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:16.628 16:01:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:16.628 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:16.628 16:01:23 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:17.195 0000:00:03.0 (1af4 1001): 
Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:17.195 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.195 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:17.454 16:01:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:17.454 16:01:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.454 16:01:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.454 16:01:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:17.454 16:01:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:17.454 16:01:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:17.454 16:01:24 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:17.454 16:01:24 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:17.454 16:01:24 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:17.454 16:01:24 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:17.454 16:01:24 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:17.454 16:01:24 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:17.454 16:01:24 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:17.454 16:01:24 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:17.454 16:01:24 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:17.454 16:01:24 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:17.454 16:01:24 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:06:17.454 16:01:24 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:17.454 16:01:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:17.454 16:01:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:17.454 16:01:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:17.454 16:01:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.454 16:01:24 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:17.454 16:01:24 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:17.454 16:01:24 -- common/autotest_common.sh@1566 -- # device=0x0010 00:06:17.454 16:01:24 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:17.454 16:01:24 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:06:17.454 16:01:24 -- common/autotest_common.sh@1572 -- # return 0 00:06:17.454 16:01:24 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:06:17.454 16:01:24 -- common/autotest_common.sh@1580 -- # return 0 00:06:17.454 16:01:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:17.454 16:01:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:17.454 16:01:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.454 16:01:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:17.454 16:01:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:17.454 16:01:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.454 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.454 16:01:24 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:17.454 16:01:24 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.454 16:01:24 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:17.454 16:01:24 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.454 16:01:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.454 16:01:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.454 16:01:24 -- common/autotest_common.sh@10 -- # set +x 00:06:17.454 ************************************ 00:06:17.454 START TEST env 00:06:17.454 ************************************ 00:06:17.454 16:01:24 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:17.713 * Looking for test storage... 00:06:17.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.713 16:01:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.713 16:01:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.713 16:01:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.713 16:01:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.713 16:01:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.713 16:01:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.713 16:01:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.713 16:01:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.713 16:01:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.713 16:01:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.713 16:01:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.713 16:01:24 env -- scripts/common.sh@344 -- # case "$op" in 00:06:17.713 16:01:24 env -- scripts/common.sh@345 -- # : 1 00:06:17.713 16:01:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.713 16:01:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.713 16:01:24 env -- scripts/common.sh@365 -- # decimal 1 00:06:17.713 16:01:24 env -- scripts/common.sh@353 -- # local d=1 00:06:17.713 16:01:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.713 16:01:24 env -- scripts/common.sh@355 -- # echo 1 00:06:17.713 16:01:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.713 16:01:24 env -- scripts/common.sh@366 -- # decimal 2 00:06:17.713 16:01:24 env -- scripts/common.sh@353 -- # local d=2 00:06:17.713 16:01:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.713 16:01:24 env -- scripts/common.sh@355 -- # echo 2 00:06:17.713 16:01:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.713 16:01:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.713 16:01:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.713 16:01:24 env -- scripts/common.sh@368 -- # return 0 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.713 --rc genhtml_branch_coverage=1 00:06:17.713 --rc genhtml_function_coverage=1 00:06:17.713 --rc genhtml_legend=1 00:06:17.713 --rc geninfo_all_blocks=1 00:06:17.713 --rc geninfo_unexecuted_blocks=1 00:06:17.713 00:06:17.713 ' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.713 --rc genhtml_branch_coverage=1 00:06:17.713 --rc genhtml_function_coverage=1 00:06:17.713 --rc genhtml_legend=1 00:06:17.713 --rc geninfo_all_blocks=1 00:06:17.713 --rc geninfo_unexecuted_blocks=1 00:06:17.713 00:06:17.713 ' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.713 --rc genhtml_branch_coverage=1 00:06:17.713 --rc genhtml_function_coverage=1 00:06:17.713 --rc genhtml_legend=1 00:06:17.713 --rc geninfo_all_blocks=1 00:06:17.713 --rc geninfo_unexecuted_blocks=1 00:06:17.713 00:06:17.713 ' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.713 --rc genhtml_branch_coverage=1 00:06:17.713 --rc genhtml_function_coverage=1 00:06:17.713 --rc genhtml_legend=1 00:06:17.713 --rc geninfo_all_blocks=1 00:06:17.713 --rc geninfo_unexecuted_blocks=1 00:06:17.713 00:06:17.713 ' 00:06:17.713 16:01:24 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.713 16:01:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.713 16:01:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.713 ************************************ 00:06:17.713 START TEST env_memory 00:06:17.713 ************************************ 00:06:17.713 16:01:24 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:17.713 00:06:17.713 00:06:17.713 CUnit - A unit testing framework for C - Version 2.1-3 00:06:17.713 http://cunit.sourceforge.net/ 00:06:17.713 00:06:17.713 00:06:17.713 Suite: memory 00:06:17.714 Test: alloc and free memory map ...[2024-11-19 16:01:24.353065] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:17.714 passed 00:06:17.714 Test: mem map translation ...[2024-11-19 16:01:24.384313] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:17.714 [2024-11-19 16:01:24.384548] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:17.714 [2024-11-19 16:01:24.384706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:17.714 [2024-11-19 16:01:24.384722] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:17.972 passed 00:06:17.972 Test: mem map registration ...[2024-11-19 16:01:24.448852] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:17.972 [2024-11-19 16:01:24.449056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:17.973 passed 00:06:17.973 Test: mem map adjacent registrations ...passed 00:06:17.973 00:06:17.973 Run Summary: Type Total Ran Passed Failed Inactive 00:06:17.973 suites 1 1 n/a 0 0 00:06:17.973 tests 4 4 4 0 0 00:06:17.973 asserts 152 152 152 0 n/a 00:06:17.973 00:06:17.973 Elapsed time = 0.213 seconds 00:06:17.973 00:06:17.973 real 0m0.232s 00:06:17.973 user 0m0.218s 00:06:17.973 sys 0m0.008s 00:06:17.973 16:01:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.973 16:01:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 ************************************ 00:06:17.973 END TEST env_memory 00:06:17.973 ************************************ 00:06:17.973 16:01:24 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:17.973 16:01:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.973 16:01:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.973 16:01:24 env -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 ************************************ 00:06:17.973 START TEST env_vtophys 00:06:17.973 ************************************ 00:06:17.973 16:01:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:17.973 EAL: lib.eal log level changed from notice to debug 00:06:17.973 EAL: Detected lcore 0 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 1 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 2 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 3 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 4 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 5 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 6 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 7 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 8 as core 0 on socket 0 00:06:17.973 EAL: Detected lcore 9 as core 0 on socket 0 00:06:17.973 EAL: Maximum logical cores by configuration: 128 00:06:17.973 EAL: Detected CPU lcores: 10 00:06:17.973 EAL: Detected NUMA nodes: 1 00:06:17.973 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:17.973 EAL: Detected shared linkage of DPDK 00:06:17.973 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:17.973 EAL: Registered [vdev] bus. 00:06:17.973 EAL: bus.vdev log level changed from disabled to notice 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:17.973 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:17.973 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:17.973 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:17.973 EAL: No shared files mode enabled, IPC will be disabled 00:06:17.973 EAL: No shared files mode enabled, IPC is disabled 00:06:17.973 EAL: Selected IOVA mode 'PA' 00:06:17.973 EAL: Probing VFIO support... 00:06:17.973 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:17.973 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:17.973 EAL: Ask a virtual area of 0x2e000 bytes 00:06:17.973 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:17.973 EAL: Setting up physically contiguous memory... 00:06:17.973 EAL: Setting maximum number of open files to 524288 00:06:17.973 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:17.973 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:17.973 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.973 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:17.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.973 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.973 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:17.973 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:17.973 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.973 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:17.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.973 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.973 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:17.973 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:17.973 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.973 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:17.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.973 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.973 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:17.973 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:17.973 EAL: Ask a virtual area of 0x61000 bytes 00:06:17.973 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:17.973 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:17.973 EAL: Ask a virtual area of 0x400000000 bytes 00:06:17.973 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:17.973 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:17.973 EAL: Hugepages will be freed exactly as allocated. 00:06:17.973 EAL: No shared files mode enabled, IPC is disabled 00:06:17.973 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: TSC frequency is ~2200000 KHz 00:06:18.233 EAL: Main lcore 0 is ready (tid=7fa86ec12a00;cpuset=[0]) 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 0 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 2MB 00:06:18.233 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:18.233 EAL: Mem event callback 'spdk:(nil)' registered 00:06:18.233 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:18.233 00:06:18.233 00:06:18.233 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.233 http://cunit.sourceforge.net/ 00:06:18.233 00:06:18.233 00:06:18.233 Suite: components_suite 00:06:18.233 Test: vtophys_malloc_test ...passed 00:06:18.233 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 4MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 4MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 6MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 6MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 10MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 10MB 00:06:18.233 EAL: Trying to obtain current memory policy. 
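The EAL bring-up logged above — memseg list reservations, TSC detection, main lcore ready — happens inside the vtophys test binary itself, so this sub-test can be repeated outside the harness. A minimal sketch, assuming the same built workspace that run_test points at in this log:

cd /home/vagrant/spdk_repo/spdk
./test/env/vtophys/vtophys        # reprints the CUnit components_suite output seen below
echo "vtophys exit code: $?"      # non-zero if any assert fails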
00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 18MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 18MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 34MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 34MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 66MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 66MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 130MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 130MB 00:06:18.233 EAL: Trying to obtain current memory policy. 00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.233 EAL: Restoring previous memory policy: 4 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was expanded by 258MB 00:06:18.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.233 EAL: request: mp_malloc_sync 00:06:18.233 EAL: No shared files mode enabled, IPC is disabled 00:06:18.233 EAL: Heap on socket 0 was shrunk by 258MB 00:06:18.233 EAL: Trying to obtain current memory policy. 
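The expansion sizes reported by the 'spdk:(nil)' mem event callback grow as 2^k + 2 MB: each step of vtophys_malloc_test allocates a power-of-two buffer, and the extra 2 MB looks like one additional hugepage of allocator overhead (an inference from the numbers, not something the log states). A quick check of the pattern:

for k in $(seq 1 10); do printf '%dMB\n' $(( (1 << k) + 2 )); done
# prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB, matching the
# "Heap on socket 0 was expanded by ..." lines above and below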
00:06:18.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.492 EAL: Restoring previous memory policy: 4 00:06:18.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.492 EAL: request: mp_malloc_sync 00:06:18.492 EAL: No shared files mode enabled, IPC is disabled 00:06:18.492 EAL: Heap on socket 0 was expanded by 514MB 00:06:18.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.492 EAL: request: mp_malloc_sync 00:06:18.492 EAL: No shared files mode enabled, IPC is disabled 00:06:18.492 EAL: Heap on socket 0 was shrunk by 514MB 00:06:18.492 EAL: Trying to obtain current memory policy. 00:06:18.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:18.750 EAL: Restoring previous memory policy: 4 00:06:18.750 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.750 EAL: request: mp_malloc_sync 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: Heap on socket 0 was expanded by 1026MB 00:06:18.750 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.750 passed 00:06:18.750 00:06:18.750 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.750 suites 1 1 n/a 0 0 00:06:18.750 tests 2 2 2 0 0 00:06:18.750 asserts 5596 5596 5596 0 n/a 00:06:18.750 00:06:18.750 Elapsed time = 0.655 seconds 00:06:18.750 EAL: request: mp_malloc_sync 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:18.750 EAL: Calling mem event callback 'spdk:(nil)' 00:06:18.750 EAL: request: mp_malloc_sync 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: Heap on socket 0 was shrunk by 2MB 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 EAL: No shared files mode enabled, IPC is disabled 00:06:18.750 00:06:18.750 real 0m0.858s 00:06:18.750 user 0m0.435s 00:06:18.750 sys 0m0.292s 00:06:18.750 16:01:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.750 ************************************ 00:06:18.750 END TEST env_vtophys 00:06:18.750 ************************************ 00:06:18.750 16:01:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 16:01:25 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:19.009 16:01:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.009 16:01:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.009 16:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 ************************************ 00:06:19.009 START TEST env_pci 00:06:19.009 ************************************ 00:06:19.009 16:01:25 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:19.009 00:06:19.009 00:06:19.009 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.009 http://cunit.sourceforge.net/ 00:06:19.009 00:06:19.009 00:06:19.009 Suite: pci 00:06:19.009 Test: pci_hook ...[2024-11-19 16:01:25.517665] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 69868 has claimed it 00:06:19.009 passed 00:06:19.009 00:06:19.009 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.009 suites 1 1 n/a 0 0 00:06:19.009 tests 1 1 1 0 0 00:06:19.009 asserts 25 25 25 0 n/a 00:06:19.009 00:06:19.009 Elapsed time = 0.002 seconds 00:06:19.009 EAL: Cannot find 
device (10000:00:01.0) 00:06:19.009 EAL: Failed to attach device on primary process 00:06:19.009 00:06:19.009 real 0m0.020s 00:06:19.009 user 0m0.010s 00:06:19.009 sys 0m0.009s 00:06:19.009 16:01:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.009 ************************************ 00:06:19.009 END TEST env_pci 00:06:19.009 ************************************ 00:06:19.009 16:01:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 16:01:25 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:19.009 16:01:25 env -- env/env.sh@15 -- # uname 00:06:19.009 16:01:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:19.009 16:01:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:19.009 16:01:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.009 16:01:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:19.009 16:01:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.009 16:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.009 ************************************ 00:06:19.009 START TEST env_dpdk_post_init 00:06:19.009 ************************************ 00:06:19.009 16:01:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:19.009 EAL: Detected CPU lcores: 10 00:06:19.009 EAL: Detected NUMA nodes: 1 00:06:19.009 EAL: Detected shared linkage of DPDK 00:06:19.009 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:19.009 EAL: Selected IOVA mode 'PA' 00:06:19.009 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.268 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:19.268 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:19.268 Starting DPDK initialization... 00:06:19.268 Starting SPDK post initialization... 00:06:19.268 SPDK NVMe probe 00:06:19.268 Attaching to 0000:00:10.0 00:06:19.268 Attaching to 0000:00:11.0 00:06:19.268 Attached to 0000:00:10.0 00:06:19.268 Attached to 0000:00:11.0 00:06:19.268 Cleaning up... 
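env_dpdk_post_init above probes and attaches two emulated NVMe controllers (vendor:device 1b36:0010 at 0000:00:10.0 and 0000:00:11.0). A hedged, read-only way to confirm what the test VM exposes before SPDK touches it; setup.sh status is the standard SPDK helper for showing device/driver bindings, and the grep pattern is taken from the probe lines above:

lspci -nn | grep '1b36:0010'
/home/vagrant/spdk_repo/spdk/scripts/setup.sh status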
00:06:19.268 00:06:19.268 real 0m0.188s 00:06:19.268 user 0m0.056s 00:06:19.268 sys 0m0.032s 00:06:19.268 16:01:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.268 16:01:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:19.268 ************************************ 00:06:19.268 END TEST env_dpdk_post_init 00:06:19.268 ************************************ 00:06:19.268 16:01:25 env -- env/env.sh@26 -- # uname 00:06:19.268 16:01:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:19.268 16:01:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:19.268 16:01:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.268 16:01:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.268 16:01:25 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.268 ************************************ 00:06:19.268 START TEST env_mem_callbacks 00:06:19.268 ************************************ 00:06:19.268 16:01:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:19.268 EAL: Detected CPU lcores: 10 00:06:19.268 EAL: Detected NUMA nodes: 1 00:06:19.268 EAL: Detected shared linkage of DPDK 00:06:19.268 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:19.268 EAL: Selected IOVA mode 'PA' 00:06:19.268 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:19.268 00:06:19.268 00:06:19.268 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.268 http://cunit.sourceforge.net/ 00:06:19.268 00:06:19.268 00:06:19.268 Suite: memory 00:06:19.268 Test: test ... 00:06:19.268 register 0x200000200000 2097152 00:06:19.268 malloc 3145728 00:06:19.268 register 0x200000400000 4194304 00:06:19.268 buf 0x200000500000 len 3145728 PASSED 00:06:19.268 malloc 64 00:06:19.268 buf 0x2000004fff40 len 64 PASSED 00:06:19.268 malloc 4194304 00:06:19.268 register 0x200000800000 6291456 00:06:19.268 buf 0x200000a00000 len 4194304 PASSED 00:06:19.268 free 0x200000500000 3145728 00:06:19.268 free 0x2000004fff40 64 00:06:19.268 unregister 0x200000400000 4194304 PASSED 00:06:19.268 free 0x200000a00000 4194304 00:06:19.268 unregister 0x200000800000 6291456 PASSED 00:06:19.268 malloc 8388608 00:06:19.268 register 0x200000400000 10485760 00:06:19.268 buf 0x200000600000 len 8388608 PASSED 00:06:19.268 free 0x200000600000 8388608 00:06:19.268 unregister 0x200000400000 10485760 PASSED 00:06:19.268 passed 00:06:19.268 00:06:19.268 Run Summary: Type Total Ran Passed Failed Inactive 00:06:19.268 suites 1 1 n/a 0 0 00:06:19.268 tests 1 1 1 0 0 00:06:19.268 asserts 15 15 15 0 n/a 00:06:19.268 00:06:19.268 Elapsed time = 0.006 seconds 00:06:19.268 00:06:19.268 real 0m0.140s 00:06:19.268 user 0m0.018s 00:06:19.268 sys 0m0.021s 00:06:19.268 16:01:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.268 ************************************ 00:06:19.268 END TEST env_mem_callbacks 00:06:19.268 ************************************ 00:06:19.268 16:01:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:19.526 00:06:19.526 real 0m1.908s 00:06:19.526 user 0m0.956s 00:06:19.526 sys 0m0.596s 00:06:19.526 16:01:26 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.526 ************************************ 00:06:19.526 END TEST env 00:06:19.526 16:01:26 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.526 
************************************ 00:06:19.526 16:01:26 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.526 16:01:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:19.526 16:01:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.526 16:01:26 -- common/autotest_common.sh@10 -- # set +x 00:06:19.526 ************************************ 00:06:19.526 START TEST rpc 00:06:19.526 ************************************ 00:06:19.526 16:01:26 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:19.526 * Looking for test storage... 00:06:19.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:19.526 16:01:26 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:19.526 16:01:26 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:19.526 16:01:26 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:19.526 16:01:26 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:19.526 16:01:26 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.526 16:01:26 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.526 16:01:26 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.526 16:01:26 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.526 16:01:26 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.526 16:01:26 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.526 16:01:26 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.526 16:01:26 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.526 16:01:26 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.526 16:01:26 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.526 16:01:26 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.526 16:01:26 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:19.526 16:01:26 rpc -- scripts/common.sh@345 -- # : 1 00:06:19.526 16:01:26 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.526 16:01:26 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.785 16:01:26 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:19.785 16:01:26 rpc -- scripts/common.sh@353 -- # local d=1 00:06:19.785 16:01:26 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.785 16:01:26 rpc -- scripts/common.sh@355 -- # echo 1 00:06:19.785 16:01:26 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.785 16:01:26 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:19.785 16:01:26 rpc -- scripts/common.sh@353 -- # local d=2 00:06:19.785 16:01:26 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.785 16:01:26 rpc -- scripts/common.sh@355 -- # echo 2 00:06:19.785 16:01:26 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.785 16:01:26 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.785 16:01:26 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.785 16:01:26 rpc -- scripts/common.sh@368 -- # return 0 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:19.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.785 --rc genhtml_branch_coverage=1 00:06:19.785 --rc genhtml_function_coverage=1 00:06:19.785 --rc genhtml_legend=1 00:06:19.785 --rc geninfo_all_blocks=1 00:06:19.785 --rc geninfo_unexecuted_blocks=1 00:06:19.785 00:06:19.785 ' 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:19.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.785 --rc genhtml_branch_coverage=1 00:06:19.785 --rc genhtml_function_coverage=1 00:06:19.785 --rc genhtml_legend=1 00:06:19.785 --rc geninfo_all_blocks=1 00:06:19.785 --rc geninfo_unexecuted_blocks=1 00:06:19.785 00:06:19.785 ' 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:19.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.785 --rc genhtml_branch_coverage=1 00:06:19.785 --rc genhtml_function_coverage=1 00:06:19.785 --rc genhtml_legend=1 00:06:19.785 --rc geninfo_all_blocks=1 00:06:19.785 --rc geninfo_unexecuted_blocks=1 00:06:19.785 00:06:19.785 ' 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:19.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.785 --rc genhtml_branch_coverage=1 00:06:19.785 --rc genhtml_function_coverage=1 00:06:19.785 --rc genhtml_legend=1 00:06:19.785 --rc geninfo_all_blocks=1 00:06:19.785 --rc geninfo_unexecuted_blocks=1 00:06:19.785 00:06:19.785 ' 00:06:19.785 16:01:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=69985 00:06:19.785 16:01:26 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:19.785 16:01:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.785 16:01:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 69985 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@835 -- # '[' -z 69985 ']' 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
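Once spdk_tgt (pid 69985, launched with '-e bdev' above) is listening on /var/tmp/spdk.sock, the rpc_cmd wrapper used throughout rpc.sh is equivalent to calling scripts/rpc.py against that socket. A hedged sketch of the same round-trip the rpc_integrity test below drives, plus the trace snapshot command quoted verbatim from the target's startup notice that follows; all method names and arguments are the ones visible in this log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py      # talks to /var/tmp/spdk.sock by default
$rpc rpc_get_methods | head                          # sanity check: the target is answering
$rpc bdev_malloc_create 8 512                        # 8 MiB disk, 512-byte blocks -> prints "Malloc0"
$rpc bdev_passthru_create -b Malloc0 -p Passthru0    # layers Passthru0 on Malloc0, claiming it
$rpc bdev_get_bdevs | jq length                      # 2, matching the "'[' 2 == 2 ']'" check below
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0
spdk_trace -s spdk_tgt -p 69985                      # snapshot of the bdev tracepoints; command
                                                     # quoted from the startup notice below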
00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.785 16:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.785 [2024-11-19 16:01:26.329201] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:19.785 [2024-11-19 16:01:26.329334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69985 ] 00:06:19.785 [2024-11-19 16:01:26.481103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.044 [2024-11-19 16:01:26.506803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:20.044 [2024-11-19 16:01:26.506881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 69985' to capture a snapshot of events at runtime. 00:06:20.044 [2024-11-19 16:01:26.506895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:20.044 [2024-11-19 16:01:26.506905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:20.044 [2024-11-19 16:01:26.506914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid69985 for offline analysis/debug. 00:06:20.044 [2024-11-19 16:01:26.507288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.044 [2024-11-19 16:01:26.549403] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.044 16:01:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.044 16:01:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:20.044 16:01:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.044 16:01:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:20.044 16:01:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:20.044 16:01:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:20.044 16:01:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.044 16:01:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.044 16:01:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.044 ************************************ 00:06:20.044 START TEST rpc_integrity 00:06:20.044 ************************************ 00:06:20.044 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:20.044 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:20.044 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.044 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.044 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.044 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:20.044 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:20.304 { 00:06:20.304 "name": "Malloc0", 00:06:20.304 "aliases": [ 00:06:20.304 "82f57106-2df6-4b8f-a33c-f00b89443f2b" 00:06:20.304 ], 00:06:20.304 "product_name": "Malloc disk", 00:06:20.304 "block_size": 512, 00:06:20.304 "num_blocks": 16384, 00:06:20.304 "uuid": "82f57106-2df6-4b8f-a33c-f00b89443f2b", 00:06:20.304 "assigned_rate_limits": { 00:06:20.304 "rw_ios_per_sec": 0, 00:06:20.304 "rw_mbytes_per_sec": 0, 00:06:20.304 "r_mbytes_per_sec": 0, 00:06:20.304 "w_mbytes_per_sec": 0 00:06:20.304 }, 00:06:20.304 "claimed": false, 00:06:20.304 "zoned": false, 00:06:20.304 "supported_io_types": { 00:06:20.304 "read": true, 00:06:20.304 "write": true, 00:06:20.304 "unmap": true, 00:06:20.304 "flush": true, 00:06:20.304 "reset": true, 00:06:20.304 "nvme_admin": false, 00:06:20.304 "nvme_io": false, 00:06:20.304 "nvme_io_md": false, 00:06:20.304 "write_zeroes": true, 00:06:20.304 "zcopy": true, 00:06:20.304 "get_zone_info": false, 00:06:20.304 "zone_management": false, 00:06:20.304 "zone_append": false, 00:06:20.304 "compare": false, 00:06:20.304 "compare_and_write": false, 00:06:20.304 "abort": true, 00:06:20.304 "seek_hole": false, 00:06:20.304 "seek_data": false, 00:06:20.304 "copy": true, 00:06:20.304 "nvme_iov_md": false 00:06:20.304 }, 00:06:20.304 "memory_domains": [ 00:06:20.304 { 00:06:20.304 "dma_device_id": "system", 00:06:20.304 "dma_device_type": 1 00:06:20.304 }, 00:06:20.304 { 00:06:20.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.304 "dma_device_type": 2 00:06:20.304 } 00:06:20.304 ], 00:06:20.304 "driver_specific": {} 00:06:20.304 } 00:06:20.304 ]' 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.304 [2024-11-19 16:01:26.846664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:20.304 [2024-11-19 16:01:26.846754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:20.304 [2024-11-19 16:01:26.846834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f4da0 00:06:20.304 [2024-11-19 16:01:26.846852] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:20.304 [2024-11-19 16:01:26.848518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:20.304 [2024-11-19 16:01:26.848557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:20.304 Passthru0 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.304 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:20.304 { 00:06:20.304 "name": "Malloc0", 00:06:20.304 "aliases": [ 00:06:20.304 "82f57106-2df6-4b8f-a33c-f00b89443f2b" 00:06:20.304 ], 00:06:20.304 "product_name": "Malloc disk", 00:06:20.304 "block_size": 512, 00:06:20.304 "num_blocks": 16384, 00:06:20.304 "uuid": "82f57106-2df6-4b8f-a33c-f00b89443f2b", 00:06:20.304 "assigned_rate_limits": { 00:06:20.304 "rw_ios_per_sec": 0, 00:06:20.304 "rw_mbytes_per_sec": 0, 00:06:20.304 "r_mbytes_per_sec": 0, 00:06:20.304 "w_mbytes_per_sec": 0 00:06:20.304 }, 00:06:20.304 "claimed": true, 00:06:20.304 "claim_type": "exclusive_write", 00:06:20.304 "zoned": false, 00:06:20.304 "supported_io_types": { 00:06:20.304 "read": true, 00:06:20.304 "write": true, 00:06:20.304 "unmap": true, 00:06:20.304 "flush": true, 00:06:20.304 "reset": true, 00:06:20.304 "nvme_admin": false, 00:06:20.304 "nvme_io": false, 00:06:20.304 "nvme_io_md": false, 00:06:20.304 "write_zeroes": true, 00:06:20.304 "zcopy": true, 00:06:20.304 "get_zone_info": false, 00:06:20.304 "zone_management": false, 00:06:20.304 "zone_append": false, 00:06:20.304 "compare": false, 00:06:20.304 "compare_and_write": false, 00:06:20.304 "abort": true, 00:06:20.304 "seek_hole": false, 00:06:20.304 "seek_data": false, 00:06:20.304 "copy": true, 00:06:20.304 "nvme_iov_md": false 00:06:20.304 }, 00:06:20.304 "memory_domains": [ 00:06:20.304 { 00:06:20.304 "dma_device_id": "system", 00:06:20.304 "dma_device_type": 1 00:06:20.304 }, 00:06:20.304 { 00:06:20.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.304 "dma_device_type": 2 00:06:20.304 } 00:06:20.304 ], 00:06:20.304 "driver_specific": {} 00:06:20.304 }, 00:06:20.304 { 00:06:20.304 "name": "Passthru0", 00:06:20.304 "aliases": [ 00:06:20.304 "12a4386b-cef2-5ad9-9628-37aaed18273d" 00:06:20.304 ], 00:06:20.304 "product_name": "passthru", 00:06:20.304 "block_size": 512, 00:06:20.304 "num_blocks": 16384, 00:06:20.304 "uuid": "12a4386b-cef2-5ad9-9628-37aaed18273d", 00:06:20.304 "assigned_rate_limits": { 00:06:20.304 "rw_ios_per_sec": 0, 00:06:20.304 "rw_mbytes_per_sec": 0, 00:06:20.304 "r_mbytes_per_sec": 0, 00:06:20.304 "w_mbytes_per_sec": 0 00:06:20.304 }, 00:06:20.304 "claimed": false, 00:06:20.304 "zoned": false, 00:06:20.304 "supported_io_types": { 00:06:20.304 "read": true, 00:06:20.304 "write": true, 00:06:20.304 "unmap": true, 00:06:20.304 "flush": true, 00:06:20.304 "reset": true, 00:06:20.304 "nvme_admin": false, 00:06:20.304 "nvme_io": false, 00:06:20.304 "nvme_io_md": false, 00:06:20.304 "write_zeroes": true, 00:06:20.304 "zcopy": true, 00:06:20.304 "get_zone_info": false, 00:06:20.304 "zone_management": false, 00:06:20.304 "zone_append": false, 00:06:20.304 "compare": false, 00:06:20.304 "compare_and_write": false, 00:06:20.304 "abort": true, 00:06:20.304 "seek_hole": false, 00:06:20.304 "seek_data": false, 00:06:20.304 "copy": true, 00:06:20.304 "nvme_iov_md": false 00:06:20.304 }, 00:06:20.304 "memory_domains": [ 00:06:20.304 { 00:06:20.304 "dma_device_id": "system", 00:06:20.304 
"dma_device_type": 1 00:06:20.304 }, 00:06:20.304 { 00:06:20.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.304 "dma_device_type": 2 00:06:20.304 } 00:06:20.304 ], 00:06:20.304 "driver_specific": { 00:06:20.304 "passthru": { 00:06:20.304 "name": "Passthru0", 00:06:20.304 "base_bdev_name": "Malloc0" 00:06:20.304 } 00:06:20.304 } 00:06:20.304 } 00:06:20.304 ]' 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:20.304 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:20.305 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.305 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.305 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.305 16:01:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.305 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:20.305 16:01:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:20.305 16:01:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:20.305 00:06:20.305 real 0m0.316s 00:06:20.305 user 0m0.213s 00:06:20.305 sys 0m0.039s 00:06:20.305 16:01:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.305 ************************************ 00:06:20.305 END TEST rpc_integrity 00:06:20.305 16:01:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:20.305 ************************************ 00:06:20.564 16:01:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 ************************************ 00:06:20.564 START TEST rpc_plugins 00:06:20.564 ************************************ 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:20.564 { 00:06:20.564 "name": "Malloc1", 00:06:20.564 "aliases": [ 00:06:20.564 "24d48c6f-06c1-4619-ad36-84b58db0a8f6" 00:06:20.564 ], 00:06:20.564 "product_name": "Malloc disk", 00:06:20.564 "block_size": 4096, 00:06:20.564 "num_blocks": 256, 00:06:20.564 "uuid": "24d48c6f-06c1-4619-ad36-84b58db0a8f6", 00:06:20.564 "assigned_rate_limits": { 00:06:20.564 "rw_ios_per_sec": 0, 00:06:20.564 "rw_mbytes_per_sec": 0, 00:06:20.564 "r_mbytes_per_sec": 0, 00:06:20.564 "w_mbytes_per_sec": 0 00:06:20.564 }, 00:06:20.564 "claimed": false, 00:06:20.564 "zoned": false, 00:06:20.564 "supported_io_types": { 00:06:20.564 "read": true, 00:06:20.564 "write": true, 00:06:20.564 "unmap": true, 00:06:20.564 "flush": true, 00:06:20.564 "reset": true, 00:06:20.564 "nvme_admin": false, 00:06:20.564 "nvme_io": false, 00:06:20.564 "nvme_io_md": false, 00:06:20.564 "write_zeroes": true, 00:06:20.564 "zcopy": true, 00:06:20.564 "get_zone_info": false, 00:06:20.564 "zone_management": false, 00:06:20.564 "zone_append": false, 00:06:20.564 "compare": false, 00:06:20.564 "compare_and_write": false, 00:06:20.564 "abort": true, 00:06:20.564 "seek_hole": false, 00:06:20.564 "seek_data": false, 00:06:20.564 "copy": true, 00:06:20.564 "nvme_iov_md": false 00:06:20.564 }, 00:06:20.564 "memory_domains": [ 00:06:20.564 { 00:06:20.564 "dma_device_id": "system", 00:06:20.564 "dma_device_type": 1 00:06:20.564 }, 00:06:20.564 { 00:06:20.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:20.564 "dma_device_type": 2 00:06:20.564 } 00:06:20.564 ], 00:06:20.564 "driver_specific": {} 00:06:20.564 } 00:06:20.564 ]' 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:20.564 16:01:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:20.564 00:06:20.564 real 0m0.153s 00:06:20.564 user 0m0.098s 00:06:20.564 sys 0m0.019s 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 ************************************ 00:06:20.564 END TEST rpc_plugins 00:06:20.564 ************************************ 00:06:20.564 16:01:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.564 16:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.564 ************************************ 00:06:20.564 START TEST 
rpc_trace_cmd_test 00:06:20.564 ************************************ 00:06:20.564 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:20.564 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:20.564 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:20.564 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:20.564 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:20.823 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid69985", 00:06:20.823 "tpoint_group_mask": "0x8", 00:06:20.823 "iscsi_conn": { 00:06:20.823 "mask": "0x2", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "scsi": { 00:06:20.823 "mask": "0x4", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "bdev": { 00:06:20.823 "mask": "0x8", 00:06:20.823 "tpoint_mask": "0xffffffffffffffff" 00:06:20.823 }, 00:06:20.823 "nvmf_rdma": { 00:06:20.823 "mask": "0x10", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "nvmf_tcp": { 00:06:20.823 "mask": "0x20", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "ftl": { 00:06:20.823 "mask": "0x40", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "blobfs": { 00:06:20.823 "mask": "0x80", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "dsa": { 00:06:20.823 "mask": "0x200", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "thread": { 00:06:20.823 "mask": "0x400", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "nvme_pcie": { 00:06:20.823 "mask": "0x800", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "iaa": { 00:06:20.823 "mask": "0x1000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "nvme_tcp": { 00:06:20.823 "mask": "0x2000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "bdev_nvme": { 00:06:20.823 "mask": "0x4000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "sock": { 00:06:20.823 "mask": "0x8000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "blob": { 00:06:20.823 "mask": "0x10000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "bdev_raid": { 00:06:20.823 "mask": "0x20000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 }, 00:06:20.823 "scheduler": { 00:06:20.823 "mask": "0x40000", 00:06:20.823 "tpoint_mask": "0x0" 00:06:20.823 } 00:06:20.823 }' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:20.823 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:21.083 16:01:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:21.083 00:06:21.083 real 
0m0.271s 00:06:21.083 user 0m0.237s 00:06:21.083 sys 0m0.025s 00:06:21.083 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 ************************************ 00:06:21.083 END TEST rpc_trace_cmd_test 00:06:21.083 ************************************ 00:06:21.083 16:01:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:21.083 16:01:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:21.083 16:01:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:21.083 16:01:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.083 16:01:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.083 16:01:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 ************************************ 00:06:21.083 START TEST rpc_daemon_integrity 00:06:21.083 ************************************ 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:21.083 { 00:06:21.083 "name": "Malloc2", 00:06:21.083 "aliases": [ 00:06:21.083 "1bbeca46-c385-478c-a74a-1ccd7ad3a1a8" 00:06:21.083 ], 00:06:21.083 "product_name": "Malloc disk", 00:06:21.083 "block_size": 512, 00:06:21.083 "num_blocks": 16384, 00:06:21.083 "uuid": "1bbeca46-c385-478c-a74a-1ccd7ad3a1a8", 00:06:21.083 "assigned_rate_limits": { 00:06:21.083 "rw_ios_per_sec": 0, 00:06:21.083 "rw_mbytes_per_sec": 0, 00:06:21.083 "r_mbytes_per_sec": 0, 00:06:21.083 "w_mbytes_per_sec": 0 00:06:21.083 }, 00:06:21.083 "claimed": false, 00:06:21.083 "zoned": false, 00:06:21.083 "supported_io_types": { 00:06:21.083 "read": true, 00:06:21.083 "write": true, 00:06:21.083 "unmap": true, 00:06:21.083 "flush": true, 00:06:21.083 "reset": true, 00:06:21.083 "nvme_admin": false, 00:06:21.083 "nvme_io": false, 00:06:21.083 "nvme_io_md": false, 00:06:21.083 "write_zeroes": true, 00:06:21.083 "zcopy": true, 
00:06:21.083 "get_zone_info": false, 00:06:21.083 "zone_management": false, 00:06:21.083 "zone_append": false, 00:06:21.083 "compare": false, 00:06:21.083 "compare_and_write": false, 00:06:21.083 "abort": true, 00:06:21.083 "seek_hole": false, 00:06:21.083 "seek_data": false, 00:06:21.083 "copy": true, 00:06:21.083 "nvme_iov_md": false 00:06:21.083 }, 00:06:21.083 "memory_domains": [ 00:06:21.083 { 00:06:21.083 "dma_device_id": "system", 00:06:21.083 "dma_device_type": 1 00:06:21.083 }, 00:06:21.083 { 00:06:21.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.083 "dma_device_type": 2 00:06:21.083 } 00:06:21.083 ], 00:06:21.083 "driver_specific": {} 00:06:21.083 } 00:06:21.083 ]' 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 [2024-11-19 16:01:27.735101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:21.083 [2024-11-19 16:01:27.735220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:21.083 [2024-11-19 16:01:27.735239] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f4a60 00:06:21.083 [2024-11-19 16:01:27.735259] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:21.083 [2024-11-19 16:01:27.736659] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:21.083 [2024-11-19 16:01:27.736706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:21.083 Passthru0 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.083 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:21.083 { 00:06:21.083 "name": "Malloc2", 00:06:21.083 "aliases": [ 00:06:21.083 "1bbeca46-c385-478c-a74a-1ccd7ad3a1a8" 00:06:21.083 ], 00:06:21.083 "product_name": "Malloc disk", 00:06:21.083 "block_size": 512, 00:06:21.083 "num_blocks": 16384, 00:06:21.083 "uuid": "1bbeca46-c385-478c-a74a-1ccd7ad3a1a8", 00:06:21.083 "assigned_rate_limits": { 00:06:21.083 "rw_ios_per_sec": 0, 00:06:21.083 "rw_mbytes_per_sec": 0, 00:06:21.083 "r_mbytes_per_sec": 0, 00:06:21.083 "w_mbytes_per_sec": 0 00:06:21.083 }, 00:06:21.083 "claimed": true, 00:06:21.083 "claim_type": "exclusive_write", 00:06:21.083 "zoned": false, 00:06:21.083 "supported_io_types": { 00:06:21.083 "read": true, 00:06:21.083 "write": true, 00:06:21.083 "unmap": true, 00:06:21.083 "flush": true, 00:06:21.083 "reset": true, 00:06:21.083 "nvme_admin": false, 00:06:21.083 "nvme_io": false, 00:06:21.083 "nvme_io_md": false, 00:06:21.083 "write_zeroes": true, 00:06:21.083 "zcopy": true, 00:06:21.083 "get_zone_info": false, 00:06:21.083 "zone_management": false, 00:06:21.083 "zone_append": false, 00:06:21.083 
"compare": false, 00:06:21.083 "compare_and_write": false, 00:06:21.083 "abort": true, 00:06:21.083 "seek_hole": false, 00:06:21.083 "seek_data": false, 00:06:21.083 "copy": true, 00:06:21.083 "nvme_iov_md": false 00:06:21.083 }, 00:06:21.083 "memory_domains": [ 00:06:21.083 { 00:06:21.083 "dma_device_id": "system", 00:06:21.083 "dma_device_type": 1 00:06:21.083 }, 00:06:21.083 { 00:06:21.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.083 "dma_device_type": 2 00:06:21.083 } 00:06:21.083 ], 00:06:21.083 "driver_specific": {} 00:06:21.083 }, 00:06:21.083 { 00:06:21.083 "name": "Passthru0", 00:06:21.083 "aliases": [ 00:06:21.083 "d2a84fab-a022-5b9c-9812-868ef4182a13" 00:06:21.083 ], 00:06:21.083 "product_name": "passthru", 00:06:21.083 "block_size": 512, 00:06:21.083 "num_blocks": 16384, 00:06:21.083 "uuid": "d2a84fab-a022-5b9c-9812-868ef4182a13", 00:06:21.083 "assigned_rate_limits": { 00:06:21.083 "rw_ios_per_sec": 0, 00:06:21.083 "rw_mbytes_per_sec": 0, 00:06:21.083 "r_mbytes_per_sec": 0, 00:06:21.083 "w_mbytes_per_sec": 0 00:06:21.083 }, 00:06:21.083 "claimed": false, 00:06:21.083 "zoned": false, 00:06:21.083 "supported_io_types": { 00:06:21.083 "read": true, 00:06:21.083 "write": true, 00:06:21.083 "unmap": true, 00:06:21.083 "flush": true, 00:06:21.083 "reset": true, 00:06:21.083 "nvme_admin": false, 00:06:21.083 "nvme_io": false, 00:06:21.083 "nvme_io_md": false, 00:06:21.083 "write_zeroes": true, 00:06:21.083 "zcopy": true, 00:06:21.083 "get_zone_info": false, 00:06:21.084 "zone_management": false, 00:06:21.084 "zone_append": false, 00:06:21.084 "compare": false, 00:06:21.084 "compare_and_write": false, 00:06:21.084 "abort": true, 00:06:21.084 "seek_hole": false, 00:06:21.084 "seek_data": false, 00:06:21.084 "copy": true, 00:06:21.084 "nvme_iov_md": false 00:06:21.084 }, 00:06:21.084 "memory_domains": [ 00:06:21.084 { 00:06:21.084 "dma_device_id": "system", 00:06:21.084 "dma_device_type": 1 00:06:21.084 }, 00:06:21.084 { 00:06:21.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:21.084 "dma_device_type": 2 00:06:21.084 } 00:06:21.084 ], 00:06:21.084 "driver_specific": { 00:06:21.084 "passthru": { 00:06:21.084 "name": "Passthru0", 00:06:21.084 "base_bdev_name": "Malloc2" 00:06:21.084 } 00:06:21.084 } 00:06:21.084 } 00:06:21.084 ]' 00:06:21.084 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.343 16:01:27 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:21.343 00:06:21.343 real 0m0.308s 00:06:21.343 user 0m0.211s 00:06:21.343 sys 0m0.033s 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.343 16:01:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:21.343 ************************************ 00:06:21.343 END TEST rpc_daemon_integrity 00:06:21.343 ************************************ 00:06:21.343 16:01:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:21.343 16:01:27 rpc -- rpc/rpc.sh@84 -- # killprocess 69985 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 69985 ']' 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@958 -- # kill -0 69985 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@959 -- # uname 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69985 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.343 killing process with pid 69985 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69985' 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@973 -- # kill 69985 00:06:21.343 16:01:27 rpc -- common/autotest_common.sh@978 -- # wait 69985 00:06:21.602 00:06:21.602 real 0m2.135s 00:06:21.602 user 0m2.857s 00:06:21.602 sys 0m0.576s 00:06:21.602 16:01:28 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.602 16:01:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.602 ************************************ 00:06:21.602 END TEST rpc 00:06:21.602 ************************************ 00:06:21.602 16:01:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.602 16:01:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.602 16:01:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.602 16:01:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.602 ************************************ 00:06:21.602 START TEST skip_rpc 00:06:21.602 ************************************ 00:06:21.602 16:01:28 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:21.861 * Looking for test storage... 
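The teardown just above is the harness's killprocess helper: it confirms pid 69985 is still alive and is the expected reactor process, logs 'killing process with pid 69985', sends the signal, and waits. A condensed sketch of that flow — the bare 'wait' only reaps the pid because spdk_tgt is a child of the calling shell inside autotest_common.sh:

pid=69985                                                     # spdk_tgt pid from this run
if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]; then
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                               # mirrors the 'wait 69985' above
fi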
00:06:21.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.861 16:01:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.861 --rc genhtml_branch_coverage=1 00:06:21.861 --rc genhtml_function_coverage=1 00:06:21.861 --rc genhtml_legend=1 00:06:21.861 --rc geninfo_all_blocks=1 00:06:21.861 --rc geninfo_unexecuted_blocks=1 00:06:21.861 00:06:21.861 ' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.861 --rc genhtml_branch_coverage=1 00:06:21.861 --rc genhtml_function_coverage=1 00:06:21.861 --rc genhtml_legend=1 00:06:21.861 --rc geninfo_all_blocks=1 00:06:21.861 --rc geninfo_unexecuted_blocks=1 00:06:21.861 00:06:21.861 ' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:21.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.861 --rc genhtml_branch_coverage=1 00:06:21.861 --rc genhtml_function_coverage=1 00:06:21.861 --rc genhtml_legend=1 00:06:21.861 --rc geninfo_all_blocks=1 00:06:21.861 --rc geninfo_unexecuted_blocks=1 00:06:21.861 00:06:21.861 ' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.861 --rc genhtml_branch_coverage=1 00:06:21.861 --rc genhtml_function_coverage=1 00:06:21.861 --rc genhtml_legend=1 00:06:21.861 --rc geninfo_all_blocks=1 00:06:21.861 --rc geninfo_unexecuted_blocks=1 00:06:21.861 00:06:21.861 ' 00:06:21.861 16:01:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:21.861 16:01:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:21.861 16:01:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.861 16:01:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.861 ************************************ 00:06:21.861 START TEST skip_rpc 00:06:21.861 ************************************ 00:06:21.861 16:01:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:21.861 16:01:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=70184 00:06:21.861 16:01:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.861 16:01:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:21.861 16:01:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:21.861 [2024-11-19 16:01:28.508661] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
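The target launched here runs with --no-rpc-server, and the stage then verifies that any rpc.py call fails (the NOT rpc_cmd spdk_get_version block that follows). A minimal sketch of the same check, assuming the repository paths shown in the log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo 'unexpected: spdk_get_version succeeded without an RPC server' >&2
        exit 1
    fi
    kill "$tgt_pid"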
00:06:21.861 [2024-11-19 16:01:28.508956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70184 ] 00:06:22.119 [2024-11-19 16:01:28.659071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.120 [2024-11-19 16:01:28.682091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.120 [2024-11-19 16:01:28.720205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 70184 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 70184 ']' 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 70184 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70184 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70184' 00:06:27.395 killing process with pid 70184 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 70184 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 70184 00:06:27.395 00:06:27.395 ************************************ 00:06:27.395 END TEST skip_rpc 00:06:27.395 ************************************ 00:06:27.395 real 0m5.274s 00:06:27.395 user 0m5.008s 00:06:27.395 sys 0m0.184s 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.395 16:01:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.395 16:01:33 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:27.395 16:01:33 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.395 16:01:33 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.395 16:01:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.395 ************************************ 00:06:27.395 START TEST skip_rpc_with_json 00:06:27.395 ************************************ 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:27.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=70265 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 70265 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 70265 ']' 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.395 16:01:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.395 [2024-11-19 16:01:33.839107] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
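This stage drives the freshly started target over JSON-RPC: nvmf_get_transports first fails because no TCP transport exists yet, nvmf_create_transport -t tcp adds one, and save_config writes the whole configuration to config.json so a second target can later be booted from it. A hedged sketch of the same sequence with rpc.py (socket defaults and file paths taken from the log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_get_transports --trtype tcp || true    # expected to fail before the transport exists
    $rpc nvmf_create_transport -t tcp
    $rpc save_config > /home/vagrant/spdk_repo/spdk/test/rpc/config.json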
00:06:27.395 [2024-11-19 16:01:33.839472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70265 ] 00:06:27.395 [2024-11-19 16:01:33.990537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.395 [2024-11-19 16:01:34.013028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.395 [2024-11-19 16:01:34.048992] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.655 [2024-11-19 16:01:34.168323] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:27.655 request: 00:06:27.655 { 00:06:27.655 "trtype": "tcp", 00:06:27.655 "method": "nvmf_get_transports", 00:06:27.655 "req_id": 1 00:06:27.655 } 00:06:27.655 Got JSON-RPC error response 00:06:27.655 response: 00:06:27.655 { 00:06:27.655 "code": -19, 00:06:27.655 "message": "No such device" 00:06:27.655 } 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.655 [2024-11-19 16:01:34.180424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.655 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:27.915 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.915 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.915 { 00:06:27.915 "subsystems": [ 00:06:27.915 { 00:06:27.915 "subsystem": "fsdev", 00:06:27.915 "config": [ 00:06:27.915 { 00:06:27.915 "method": "fsdev_set_opts", 00:06:27.915 "params": { 00:06:27.915 "fsdev_io_pool_size": 65535, 00:06:27.915 "fsdev_io_cache_size": 256 00:06:27.915 } 00:06:27.915 } 00:06:27.915 ] 00:06:27.915 }, 00:06:27.915 { 00:06:27.915 "subsystem": "vfio_user_target", 00:06:27.915 "config": null 00:06:27.915 }, 00:06:27.915 { 00:06:27.915 "subsystem": "keyring", 00:06:27.915 "config": [] 00:06:27.915 }, 00:06:27.915 { 00:06:27.915 "subsystem": "iobuf", 00:06:27.915 "config": [ 00:06:27.915 { 00:06:27.916 "method": "iobuf_set_options", 00:06:27.916 "params": { 00:06:27.916 "small_pool_count": 8192, 00:06:27.916 "large_pool_count": 1024, 00:06:27.916 
"small_bufsize": 8192, 00:06:27.916 "large_bufsize": 135168, 00:06:27.916 "enable_numa": false 00:06:27.916 } 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "sock", 00:06:27.916 "config": [ 00:06:27.916 { 00:06:27.916 "method": "sock_set_default_impl", 00:06:27.916 "params": { 00:06:27.916 "impl_name": "uring" 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "sock_impl_set_options", 00:06:27.916 "params": { 00:06:27.916 "impl_name": "ssl", 00:06:27.916 "recv_buf_size": 4096, 00:06:27.916 "send_buf_size": 4096, 00:06:27.916 "enable_recv_pipe": true, 00:06:27.916 "enable_quickack": false, 00:06:27.916 "enable_placement_id": 0, 00:06:27.916 "enable_zerocopy_send_server": true, 00:06:27.916 "enable_zerocopy_send_client": false, 00:06:27.916 "zerocopy_threshold": 0, 00:06:27.916 "tls_version": 0, 00:06:27.916 "enable_ktls": false 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "sock_impl_set_options", 00:06:27.916 "params": { 00:06:27.916 "impl_name": "posix", 00:06:27.916 "recv_buf_size": 2097152, 00:06:27.916 "send_buf_size": 2097152, 00:06:27.916 "enable_recv_pipe": true, 00:06:27.916 "enable_quickack": false, 00:06:27.916 "enable_placement_id": 0, 00:06:27.916 "enable_zerocopy_send_server": true, 00:06:27.916 "enable_zerocopy_send_client": false, 00:06:27.916 "zerocopy_threshold": 0, 00:06:27.916 "tls_version": 0, 00:06:27.916 "enable_ktls": false 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "sock_impl_set_options", 00:06:27.916 "params": { 00:06:27.916 "impl_name": "uring", 00:06:27.916 "recv_buf_size": 2097152, 00:06:27.916 "send_buf_size": 2097152, 00:06:27.916 "enable_recv_pipe": true, 00:06:27.916 "enable_quickack": false, 00:06:27.916 "enable_placement_id": 0, 00:06:27.916 "enable_zerocopy_send_server": false, 00:06:27.916 "enable_zerocopy_send_client": false, 00:06:27.916 "zerocopy_threshold": 0, 00:06:27.916 "tls_version": 0, 00:06:27.916 "enable_ktls": false 00:06:27.916 } 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "vmd", 00:06:27.916 "config": [] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "accel", 00:06:27.916 "config": [ 00:06:27.916 { 00:06:27.916 "method": "accel_set_options", 00:06:27.916 "params": { 00:06:27.916 "small_cache_size": 128, 00:06:27.916 "large_cache_size": 16, 00:06:27.916 "task_count": 2048, 00:06:27.916 "sequence_count": 2048, 00:06:27.916 "buf_count": 2048 00:06:27.916 } 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "bdev", 00:06:27.916 "config": [ 00:06:27.916 { 00:06:27.916 "method": "bdev_set_options", 00:06:27.916 "params": { 00:06:27.916 "bdev_io_pool_size": 65535, 00:06:27.916 "bdev_io_cache_size": 256, 00:06:27.916 "bdev_auto_examine": true, 00:06:27.916 "iobuf_small_cache_size": 128, 00:06:27.916 "iobuf_large_cache_size": 16 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "bdev_raid_set_options", 00:06:27.916 "params": { 00:06:27.916 "process_window_size_kb": 1024, 00:06:27.916 "process_max_bandwidth_mb_sec": 0 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "bdev_iscsi_set_options", 00:06:27.916 "params": { 00:06:27.916 "timeout_sec": 30 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "bdev_nvme_set_options", 00:06:27.916 "params": { 00:06:27.916 "action_on_timeout": "none", 00:06:27.916 "timeout_us": 0, 00:06:27.916 "timeout_admin_us": 0, 00:06:27.916 "keep_alive_timeout_ms": 10000, 
00:06:27.916 "arbitration_burst": 0, 00:06:27.916 "low_priority_weight": 0, 00:06:27.916 "medium_priority_weight": 0, 00:06:27.916 "high_priority_weight": 0, 00:06:27.916 "nvme_adminq_poll_period_us": 10000, 00:06:27.916 "nvme_ioq_poll_period_us": 0, 00:06:27.916 "io_queue_requests": 0, 00:06:27.916 "delay_cmd_submit": true, 00:06:27.916 "transport_retry_count": 4, 00:06:27.916 "bdev_retry_count": 3, 00:06:27.916 "transport_ack_timeout": 0, 00:06:27.916 "ctrlr_loss_timeout_sec": 0, 00:06:27.916 "reconnect_delay_sec": 0, 00:06:27.916 "fast_io_fail_timeout_sec": 0, 00:06:27.916 "disable_auto_failback": false, 00:06:27.916 "generate_uuids": false, 00:06:27.916 "transport_tos": 0, 00:06:27.916 "nvme_error_stat": false, 00:06:27.916 "rdma_srq_size": 0, 00:06:27.916 "io_path_stat": false, 00:06:27.916 "allow_accel_sequence": false, 00:06:27.916 "rdma_max_cq_size": 0, 00:06:27.916 "rdma_cm_event_timeout_ms": 0, 00:06:27.916 "dhchap_digests": [ 00:06:27.916 "sha256", 00:06:27.916 "sha384", 00:06:27.916 "sha512" 00:06:27.916 ], 00:06:27.916 "dhchap_dhgroups": [ 00:06:27.916 "null", 00:06:27.916 "ffdhe2048", 00:06:27.916 "ffdhe3072", 00:06:27.916 "ffdhe4096", 00:06:27.916 "ffdhe6144", 00:06:27.916 "ffdhe8192" 00:06:27.916 ] 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "bdev_nvme_set_hotplug", 00:06:27.916 "params": { 00:06:27.916 "period_us": 100000, 00:06:27.916 "enable": false 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "bdev_wait_for_examine" 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "scsi", 00:06:27.916 "config": null 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "scheduler", 00:06:27.916 "config": [ 00:06:27.916 { 00:06:27.916 "method": "framework_set_scheduler", 00:06:27.916 "params": { 00:06:27.916 "name": "static" 00:06:27.916 } 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "vhost_scsi", 00:06:27.916 "config": [] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "vhost_blk", 00:06:27.916 "config": [] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "ublk", 00:06:27.916 "config": [] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "nbd", 00:06:27.916 "config": [] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "nvmf", 00:06:27.916 "config": [ 00:06:27.916 { 00:06:27.916 "method": "nvmf_set_config", 00:06:27.916 "params": { 00:06:27.916 "discovery_filter": "match_any", 00:06:27.916 "admin_cmd_passthru": { 00:06:27.916 "identify_ctrlr": false 00:06:27.916 }, 00:06:27.916 "dhchap_digests": [ 00:06:27.916 "sha256", 00:06:27.916 "sha384", 00:06:27.916 "sha512" 00:06:27.916 ], 00:06:27.916 "dhchap_dhgroups": [ 00:06:27.916 "null", 00:06:27.916 "ffdhe2048", 00:06:27.916 "ffdhe3072", 00:06:27.916 "ffdhe4096", 00:06:27.916 "ffdhe6144", 00:06:27.916 "ffdhe8192" 00:06:27.916 ] 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "nvmf_set_max_subsystems", 00:06:27.916 "params": { 00:06:27.916 "max_subsystems": 1024 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "nvmf_set_crdt", 00:06:27.916 "params": { 00:06:27.916 "crdt1": 0, 00:06:27.916 "crdt2": 0, 00:06:27.916 "crdt3": 0 00:06:27.916 } 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "method": "nvmf_create_transport", 00:06:27.916 "params": { 00:06:27.916 "trtype": "TCP", 00:06:27.916 "max_queue_depth": 128, 00:06:27.916 "max_io_qpairs_per_ctrlr": 127, 00:06:27.916 "in_capsule_data_size": 4096, 00:06:27.916 "max_io_size": 131072, 00:06:27.916 
"io_unit_size": 131072, 00:06:27.916 "max_aq_depth": 128, 00:06:27.916 "num_shared_buffers": 511, 00:06:27.916 "buf_cache_size": 4294967295, 00:06:27.916 "dif_insert_or_strip": false, 00:06:27.916 "zcopy": false, 00:06:27.916 "c2h_success": true, 00:06:27.916 "sock_priority": 0, 00:06:27.916 "abort_timeout_sec": 1, 00:06:27.916 "ack_timeout": 0, 00:06:27.916 "data_wr_pool_size": 0 00:06:27.916 } 00:06:27.916 } 00:06:27.916 ] 00:06:27.916 }, 00:06:27.916 { 00:06:27.916 "subsystem": "iscsi", 00:06:27.917 "config": [ 00:06:27.917 { 00:06:27.917 "method": "iscsi_set_options", 00:06:27.917 "params": { 00:06:27.917 "node_base": "iqn.2016-06.io.spdk", 00:06:27.917 "max_sessions": 128, 00:06:27.917 "max_connections_per_session": 2, 00:06:27.917 "max_queue_depth": 64, 00:06:27.917 "default_time2wait": 2, 00:06:27.917 "default_time2retain": 20, 00:06:27.917 "first_burst_length": 8192, 00:06:27.917 "immediate_data": true, 00:06:27.917 "allow_duplicated_isid": false, 00:06:27.917 "error_recovery_level": 0, 00:06:27.917 "nop_timeout": 60, 00:06:27.917 "nop_in_interval": 30, 00:06:27.917 "disable_chap": false, 00:06:27.917 "require_chap": false, 00:06:27.917 "mutual_chap": false, 00:06:27.917 "chap_group": 0, 00:06:27.917 "max_large_datain_per_connection": 64, 00:06:27.917 "max_r2t_per_connection": 4, 00:06:27.917 "pdu_pool_size": 36864, 00:06:27.917 "immediate_data_pool_size": 16384, 00:06:27.917 "data_out_pool_size": 2048 00:06:27.917 } 00:06:27.917 } 00:06:27.917 ] 00:06:27.917 } 00:06:27.917 ] 00:06:27.917 } 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 70265 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70265 ']' 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70265 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70265 00:06:27.917 killing process with pid 70265 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70265' 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70265 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70265 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=70285 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.917 16:01:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 70285 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 70285 ']' 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 70285 00:06:33.219 
16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70285 00:06:33.219 killing process with pid 70285 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70285' 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 70285 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 70285 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:33.219 ************************************ 00:06:33.219 END TEST skip_rpc_with_json 00:06:33.219 ************************************ 00:06:33.219 00:06:33.219 real 0m6.117s 00:06:33.219 user 0m5.873s 00:06:33.219 sys 0m0.430s 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.219 16:01:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.219 16:01:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:33.219 16:01:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.219 16:01:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.219 16:01:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.478 ************************************ 00:06:33.478 START TEST skip_rpc_with_delay 00:06:33.478 ************************************ 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:33.478 16:01:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:33.478 [2024-11-19 16:01:39.996875] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.478 00:06:33.478 real 0m0.079s 00:06:33.478 user 0m0.057s 00:06:33.478 sys 0m0.021s 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.478 16:01:40 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:33.478 ************************************ 00:06:33.478 END TEST skip_rpc_with_delay 00:06:33.478 ************************************ 00:06:33.478 16:01:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:33.478 16:01:40 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:33.478 16:01:40 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:33.478 16:01:40 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.478 16:01:40 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.478 16:01:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.478 ************************************ 00:06:33.478 START TEST exit_on_failed_rpc_init 00:06:33.478 ************************************ 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=70389 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 70389 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 70389 ']' 00:06:33.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.478 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:33.478 [2024-11-19 16:01:40.133008] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
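The failure this stage is looking for shows up just below: a second spdk_tgt (-m 0x2) cannot bind /var/tmp/spdk.sock because the first instance (-m 0x1) already owns it. A sketch of the conflict, abbreviating the full binary path; the -r socket path in the last line is purely illustrative:

    spdk_tgt -m 0x1 &                          # owns the default /var/tmp/spdk.sock
    spdk_tgt -m 0x2                            # fails: RPC Unix domain socket path in use
    spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock     # hypothetical: a separate -r socket avoids the clash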
00:06:33.478 [2024-11-19 16:01:40.133294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70389 ] 00:06:33.737 [2024-11-19 16:01:40.272963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.737 [2024-11-19 16:01:40.293767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.737 [2024-11-19 16:01:40.331126] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.737 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:33.996 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:33.996 [2024-11-19 16:01:40.502131] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:33.996 [2024-11-19 16:01:40.502220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70399 ] 00:06:33.996 [2024-11-19 16:01:40.650931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.996 [2024-11-19 16:01:40.676454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.996 [2024-11-19 16:01:40.676770] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:33.996 [2024-11-19 16:01:40.677027] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:33.996 [2024-11-19 16:01:40.677190] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 70389 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 70389 ']' 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 70389 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70389 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.256 killing process with pid 70389 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70389' 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 70389 00:06:34.256 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 70389 00:06:34.515 00:06:34.515 real 0m0.915s 00:06:34.515 user 0m1.091s 00:06:34.515 sys 0m0.222s 00:06:34.515 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.515 16:01:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:34.515 ************************************ 00:06:34.515 END TEST exit_on_failed_rpc_init 00:06:34.515 ************************************ 00:06:34.515 16:01:41 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:34.515 ************************************ 00:06:34.515 END TEST skip_rpc 00:06:34.515 ************************************ 00:06:34.515 00:06:34.515 real 0m12.784s 00:06:34.515 user 0m12.200s 00:06:34.515 sys 0m1.077s 00:06:34.515 16:01:41 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.515 16:01:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.515 16:01:41 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:34.515 16:01:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.515 16:01:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.515 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:34.515 
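The rpc_client suite that starts below runs the compiled rpc_client_test binary against a live target. The same JSON-RPC socket can also be exercised by hand; a minimal sketch, assuming the default socket path:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version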
************************************ 00:06:34.515 START TEST rpc_client 00:06:34.515 ************************************ 00:06:34.515 16:01:41 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:34.515 * Looking for test storage... 00:06:34.515 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:34.515 16:01:41 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.515 16:01:41 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.515 16:01:41 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.775 16:01:41 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.775 --rc genhtml_branch_coverage=1 00:06:34.775 --rc genhtml_function_coverage=1 00:06:34.775 --rc genhtml_legend=1 00:06:34.775 --rc geninfo_all_blocks=1 00:06:34.775 --rc geninfo_unexecuted_blocks=1 00:06:34.775 00:06:34.775 ' 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.775 --rc genhtml_branch_coverage=1 00:06:34.775 --rc genhtml_function_coverage=1 00:06:34.775 --rc genhtml_legend=1 00:06:34.775 --rc geninfo_all_blocks=1 00:06:34.775 --rc geninfo_unexecuted_blocks=1 00:06:34.775 00:06:34.775 ' 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.775 --rc genhtml_branch_coverage=1 00:06:34.775 --rc genhtml_function_coverage=1 00:06:34.775 --rc genhtml_legend=1 00:06:34.775 --rc geninfo_all_blocks=1 00:06:34.775 --rc geninfo_unexecuted_blocks=1 00:06:34.775 00:06:34.775 ' 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.775 --rc genhtml_branch_coverage=1 00:06:34.775 --rc genhtml_function_coverage=1 00:06:34.775 --rc genhtml_legend=1 00:06:34.775 --rc geninfo_all_blocks=1 00:06:34.775 --rc geninfo_unexecuted_blocks=1 00:06:34.775 00:06:34.775 ' 00:06:34.775 16:01:41 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:34.775 OK 00:06:34.775 16:01:41 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:34.775 ************************************ 00:06:34.775 END TEST rpc_client 00:06:34.775 ************************************ 00:06:34.775 00:06:34.775 real 0m0.215s 00:06:34.775 user 0m0.133s 00:06:34.775 sys 0m0.085s 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.775 16:01:41 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:34.775 16:01:41 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:34.775 16:01:41 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.775 16:01:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.775 16:01:41 -- common/autotest_common.sh@10 -- # set +x 00:06:34.775 ************************************ 00:06:34.775 START TEST json_config 00:06:34.775 ************************************ 00:06:34.775 16:01:41 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:34.775 16:01:41 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.775 16:01:41 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.775 16:01:41 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.035 16:01:41 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.035 16:01:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.035 16:01:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.035 16:01:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.035 16:01:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.035 16:01:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.035 16:01:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:35.035 16:01:41 json_config -- scripts/common.sh@345 -- # : 1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.035 16:01:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:35.035 16:01:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@353 -- # local d=1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.035 16:01:41 json_config -- scripts/common.sh@355 -- # echo 1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.035 16:01:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@353 -- # local d=2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.035 16:01:41 json_config -- scripts/common.sh@355 -- # echo 2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.035 16:01:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.035 16:01:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.035 16:01:41 json_config -- scripts/common.sh@368 -- # return 0 00:06:35.035 16:01:41 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.035 16:01:41 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.035 --rc genhtml_branch_coverage=1 00:06:35.035 --rc genhtml_function_coverage=1 00:06:35.035 --rc genhtml_legend=1 00:06:35.035 --rc geninfo_all_blocks=1 00:06:35.035 --rc geninfo_unexecuted_blocks=1 00:06:35.035 00:06:35.035 ' 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.036 --rc genhtml_branch_coverage=1 00:06:35.036 --rc genhtml_function_coverage=1 00:06:35.036 --rc genhtml_legend=1 00:06:35.036 --rc geninfo_all_blocks=1 00:06:35.036 --rc geninfo_unexecuted_blocks=1 00:06:35.036 00:06:35.036 ' 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.036 --rc genhtml_branch_coverage=1 00:06:35.036 --rc genhtml_function_coverage=1 00:06:35.036 --rc genhtml_legend=1 00:06:35.036 --rc geninfo_all_blocks=1 00:06:35.036 --rc geninfo_unexecuted_blocks=1 00:06:35.036 00:06:35.036 ' 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.036 --rc genhtml_branch_coverage=1 00:06:35.036 --rc genhtml_function_coverage=1 00:06:35.036 --rc genhtml_legend=1 00:06:35.036 --rc geninfo_all_blocks=1 00:06:35.036 --rc geninfo_unexecuted_blocks=1 00:06:35.036 00:06:35.036 ' 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:35.036 16:01:41 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:35.036 16:01:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:35.036 16:01:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:35.036 16:01:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:35.036 16:01:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:35.036 16:01:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.036 16:01:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.036 16:01:41 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.036 16:01:41 json_config -- paths/export.sh@5 -- # export PATH 00:06:35.036 16:01:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@51 -- # : 0 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:35.036 16:01:41 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:35.036 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:35.036 16:01:41 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:35.036 INFO: JSON configuration test init 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.036 Waiting for target to run... 00:06:35.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
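The json_config target below is launched with --wait-for-rpc, so it idles in a pre-init state on /var/tmp/spdk_tgt.sock until the test tells it to finish initialization. A hedged sketch of that hand-off; the method names are standard SPDK RPCs, but the ordering here is only illustrative:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk_tgt.sock framework_start_init    # finish subsystem initialization
    $rpc -s /var/tmp/spdk_tgt.sock framework_wait_init     # block until initialization completes
    $rpc -s /var/tmp/spdk_tgt.sock save_config             # dump the resulting JSON configuration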
00:06:35.036 16:01:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:35.036 16:01:41 json_config -- json_config/common.sh@9 -- # local app=target 00:06:35.036 16:01:41 json_config -- json_config/common.sh@10 -- # shift 00:06:35.036 16:01:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:35.036 16:01:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:35.036 16:01:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:35.036 16:01:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.036 16:01:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:35.036 16:01:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70533 00:06:35.036 16:01:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:35.036 16:01:41 json_config -- json_config/common.sh@25 -- # waitforlisten 70533 /var/tmp/spdk_tgt.sock 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 70533 ']' 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.036 16:01:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:35.037 16:01:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.037 16:01:41 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:35.037 16:01:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:35.037 [2024-11-19 16:01:41.624264] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:35.037 [2024-11-19 16:01:41.624363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70533 ] 00:06:35.296 [2024-11-19 16:01:41.943106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.296 [2024-11-19 16:01:41.959667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.233 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:36.233 16:01:42 json_config -- json_config/common.sh@26 -- # echo '' 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:36.233 16:01:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:36.233 16:01:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:36.233 16:01:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:36.491 [2024-11-19 16:01:43.017840] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:36.491 16:01:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:36.491 16:01:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:36.491 16:01:43 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:36.491 16:01:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@54 -- # sort 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:37.059 16:01:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:37.059 16:01:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:37.059 16:01:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.059 16:01:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:37.059 16:01:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.059 16:01:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:37.318 MallocForNvmf0 00:06:37.318 16:01:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:37.318 16:01:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:37.577 MallocForNvmf1 00:06:37.577 16:01:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:37.577 16:01:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:37.836 [2024-11-19 16:01:44.392279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.836 16:01:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:37.836 16:01:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:38.094 16:01:44 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:38.094 16:01:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:38.353 16:01:44 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:38.353 16:01:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:38.612 16:01:45 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:38.613 16:01:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:38.613 [2024-11-19 16:01:45.324758] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:38.872 16:01:45 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:38.872 16:01:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.872 16:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.872 16:01:45 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:38.872 16:01:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:38.872 16:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.872 16:01:45 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:38.872 16:01:45 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:38.872 16:01:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:39.130 MallocBdevForConfigChangeCheck 00:06:39.130 16:01:45 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:39.130 16:01:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.130 16:01:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.130 16:01:45 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:39.130 16:01:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.698 16:01:46 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:39.698 INFO: shutting down applications... 
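Collected from the trace above, the RPC sequence that builds this NVMe-oF/TCP configuration reduces to a handful of rpc.py calls against the target's UNIX socket (commands and arguments exactly as they appear in the log):

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
# Two malloc bdevs that will back the namespaces.
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
# TCP transport, a subsystem, its namespaces, and a listener on 127.0.0.1:4420.
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420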
00:06:39.698 16:01:46 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:39.698 16:01:46 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:39.698 16:01:46 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:39.698 16:01:46 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:39.956 Calling clear_iscsi_subsystem 00:06:39.956 Calling clear_nvmf_subsystem 00:06:39.956 Calling clear_nbd_subsystem 00:06:39.956 Calling clear_ublk_subsystem 00:06:39.956 Calling clear_vhost_blk_subsystem 00:06:39.956 Calling clear_vhost_scsi_subsystem 00:06:39.956 Calling clear_bdev_subsystem 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:39.956 16:01:46 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:40.523 16:01:46 json_config -- json_config/json_config.sh@352 -- # break 00:06:40.523 16:01:46 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:40.523 16:01:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:40.523 16:01:46 json_config -- json_config/common.sh@31 -- # local app=target 00:06:40.523 16:01:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:40.523 16:01:46 json_config -- json_config/common.sh@35 -- # [[ -n 70533 ]] 00:06:40.524 16:01:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 70533 00:06:40.524 16:01:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:40.524 16:01:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.524 16:01:46 json_config -- json_config/common.sh@41 -- # kill -0 70533 00:06:40.524 16:01:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:40.783 16:01:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:40.783 16:01:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:40.783 16:01:47 json_config -- json_config/common.sh@41 -- # kill -0 70533 00:06:40.783 16:01:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:40.783 16:01:47 json_config -- json_config/common.sh@43 -- # break 00:06:40.783 16:01:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:40.783 16:01:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:40.783 SPDK target shutdown done 00:06:40.783 INFO: relaunching applications... 00:06:40.783 16:01:47 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
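The shutdown just traced is the usual send-SIGINT-then-poll pattern from json_config/common.sh: signal the target, then check up to 30 times, half a second apart, whether the PID is gone. Condensed:

pid=70533                      # PID recorded at launch, taken from the trace
kill -SIGINT "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done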
00:06:40.783 16:01:47 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.783 16:01:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:40.783 16:01:47 json_config -- json_config/common.sh@10 -- # shift 00:06:40.783 16:01:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:40.783 16:01:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:40.783 16:01:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:40.783 16:01:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.783 Waiting for target to run... 00:06:40.783 16:01:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:40.783 16:01:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=70733 00:06:40.783 16:01:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:40.783 16:01:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:40.783 16:01:47 json_config -- json_config/common.sh@25 -- # waitforlisten 70733 /var/tmp/spdk_tgt.sock 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 70733 ']' 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.783 16:01:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.041 [2024-11-19 16:01:47.530034] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:41.041 [2024-11-19 16:01:47.530358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:06:41.301 [2024-11-19 16:01:47.834144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.301 [2024-11-19 16:01:47.848399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.301 [2024-11-19 16:01:47.976030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:41.560 [2024-11-19 16:01:48.165723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.560 [2024-11-19 16:01:48.197753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.820 00:06:41.820 INFO: Checking if target configuration is the same... 
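The check announced here, and run in the lines that follow, is a normalize-and-diff of two JSON dumps: the live configuration fetched over RPC and the spdk_tgt_config.json the target was relaunched from, both passed through config_filter.py -method sort. A rough sketch, assuming config_filter.py reads the configuration on stdin the way json_diff.sh drives it:

rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
cfg=/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

live=$(mktemp /tmp/62.XXX)
saved=$(mktemp /tmp/spdk_tgt_config.json.XXX)
$rpc save_config | $filter -method sort > "$live"
$filter -method sort < "$cfg" > "$saved"
if diff -u "$live" "$saved"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm "$live" "$saved"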
00:06:41.820 16:01:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.820 16:01:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:41.820 16:01:48 json_config -- json_config/common.sh@26 -- # echo '' 00:06:41.820 16:01:48 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:41.820 16:01:48 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:41.820 16:01:48 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.820 16:01:48 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:41.820 16:01:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.820 + '[' 2 -ne 2 ']' 00:06:41.820 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:41.820 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:41.820 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:41.820 +++ basename /dev/fd/62 00:06:41.820 ++ mktemp /tmp/62.XXX 00:06:41.820 + tmp_file_1=/tmp/62.lLi 00:06:41.820 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:41.820 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:41.820 + tmp_file_2=/tmp/spdk_tgt_config.json.Piy 00:06:41.820 + ret=0 00:06:41.820 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:42.389 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:42.389 + diff -u /tmp/62.lLi /tmp/spdk_tgt_config.json.Piy 00:06:42.389 INFO: JSON config files are the same 00:06:42.389 + echo 'INFO: JSON config files are the same' 00:06:42.389 + rm /tmp/62.lLi /tmp/spdk_tgt_config.json.Piy 00:06:42.389 + exit 0 00:06:42.389 INFO: changing configuration and checking if this can be detected... 00:06:42.389 16:01:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:42.389 16:01:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:42.389 16:01:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.389 16:01:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:42.660 16:01:49 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.660 16:01:49 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:42.660 16:01:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.660 + '[' 2 -ne 2 ']' 00:06:42.660 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:42.660 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:42.660 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:42.660 +++ basename /dev/fd/62 00:06:42.660 ++ mktemp /tmp/62.XXX 00:06:42.660 + tmp_file_1=/tmp/62.Zbx 00:06:42.660 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:42.660 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:42.660 + tmp_file_2=/tmp/spdk_tgt_config.json.3Ss 00:06:42.660 + ret=0 00:06:42.660 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:42.925 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:43.184 + diff -u /tmp/62.Zbx /tmp/spdk_tgt_config.json.3Ss 00:06:43.184 + ret=1 00:06:43.184 + echo '=== Start of file: /tmp/62.Zbx ===' 00:06:43.184 + cat /tmp/62.Zbx 00:06:43.184 + echo '=== End of file: /tmp/62.Zbx ===' 00:06:43.184 + echo '' 00:06:43.184 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3Ss ===' 00:06:43.184 + cat /tmp/spdk_tgt_config.json.3Ss 00:06:43.184 + echo '=== End of file: /tmp/spdk_tgt_config.json.3Ss ===' 00:06:43.184 + echo '' 00:06:43.184 + rm /tmp/62.Zbx /tmp/spdk_tgt_config.json.3Ss 00:06:43.184 + exit 1 00:06:43.184 INFO: configuration change detected. 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@324 -- # [[ -n 70733 ]] 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.184 16:01:49 json_config -- json_config/json_config.sh@330 -- # killprocess 70733 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@954 -- # '[' -z 70733 ']' 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@958 -- # kill -0 70733 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@959 -- # uname 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.184 16:01:49 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70733 00:06:43.185 
killing process with pid 70733 00:06:43.185 16:01:49 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.185 16:01:49 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.185 16:01:49 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70733' 00:06:43.185 16:01:49 json_config -- common/autotest_common.sh@973 -- # kill 70733 00:06:43.185 16:01:49 json_config -- common/autotest_common.sh@978 -- # wait 70733 00:06:43.444 16:01:49 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.444 16:01:49 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:43.444 16:01:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.444 16:01:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 INFO: Success 00:06:43.444 16:01:49 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:43.444 16:01:49 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:43.444 ************************************ 00:06:43.444 END TEST json_config 00:06:43.444 ************************************ 00:06:43.444 00:06:43.444 real 0m8.622s 00:06:43.444 user 0m12.546s 00:06:43.444 sys 0m1.497s 00:06:43.444 16:01:49 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.444 16:01:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 16:01:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.444 16:01:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.444 16:01:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.444 16:01:50 -- common/autotest_common.sh@10 -- # set +x 00:06:43.444 ************************************ 00:06:43.444 START TEST json_config_extra_key 00:06:43.444 ************************************ 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.444 16:01:50 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.444 16:01:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.444 --rc genhtml_branch_coverage=1 00:06:43.444 --rc genhtml_function_coverage=1 00:06:43.444 --rc genhtml_legend=1 00:06:43.444 --rc geninfo_all_blocks=1 00:06:43.444 --rc geninfo_unexecuted_blocks=1 00:06:43.444 00:06:43.444 ' 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.444 --rc genhtml_branch_coverage=1 00:06:43.444 --rc genhtml_function_coverage=1 00:06:43.444 --rc genhtml_legend=1 00:06:43.444 --rc geninfo_all_blocks=1 00:06:43.444 --rc geninfo_unexecuted_blocks=1 00:06:43.444 00:06:43.444 ' 00:06:43.444 16:01:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.444 --rc genhtml_branch_coverage=1 00:06:43.444 --rc genhtml_function_coverage=1 00:06:43.444 --rc genhtml_legend=1 00:06:43.444 --rc geninfo_all_blocks=1 00:06:43.444 --rc geninfo_unexecuted_blocks=1 00:06:43.445 00:06:43.445 ' 00:06:43.445 16:01:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.445 --rc genhtml_branch_coverage=1 00:06:43.445 --rc genhtml_function_coverage=1 00:06:43.445 --rc genhtml_legend=1 00:06:43.445 --rc geninfo_all_blocks=1 00:06:43.445 --rc geninfo_unexecuted_blocks=1 00:06:43.445 00:06:43.445 ' 00:06:43.445 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:43.445 16:01:50 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:06:43.445 16:01:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:43.704 16:01:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.705 16:01:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.705 16:01:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.705 16:01:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.705 16:01:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.705 16:01:50 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.705 16:01:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.705 16:01:50 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.705 16:01:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:43.705 16:01:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:43.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:43.705 16:01:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:43.705 INFO: launching applications... 
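The "[: : integer expression expected" complaint captured above (and earlier in the json_config run) comes from handing an empty string to an arithmetic test: '[' '' -eq 1 ']' has nothing numeric on the left-hand side. A small illustration of the failure and a guarded form that avoids it:

opt=''                          # empty, as in the traced nvmf/common.sh line 33
[ "$opt" -eq 1 ]                # fails with "integer expression expected"
# Guarding on non-emptiness first short-circuits the arithmetic comparison:
if [ -n "$opt" ] && [ "$opt" -eq 1 ]; then
    echo 'option enabled'
fi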
00:06:43.705 16:01:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.705 Waiting for target to run... 00:06:43.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=70883 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 70883 /var/tmp/spdk_tgt.sock 00:06:43.705 16:01:50 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 70883 ']' 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.705 16:01:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:43.705 [2024-11-19 16:01:50.230461] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:43.705 [2024-11-19 16:01:50.230747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70883 ] 00:06:43.965 [2024-11-19 16:01:50.514917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.965 [2024-11-19 16:01:50.527097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.965 [2024-11-19 16:01:50.549776] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.904 16:01:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.904 16:01:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:44.904 00:06:44.904 16:01:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:44.904 INFO: shutting down applications... 
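waitforlisten itself is not expanded in these traces; conceptually it polls until the freshly launched app answers on its RPC socket. A hypothetical stand-in for it, using rpc_get_methods (an RPC this suite exercises later) rather than the real helper:

sock=/var/tmp/spdk_tgt.sock
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Poll for up to ~10 seconds; the real waitforlisten also checks the PID stays alive.
for (( i = 0; i < 100; i++ )); do
    if "$rpc" -s "$sock" rpc_get_methods &>/dev/null; then
        echo 'target is listening'
        break
    fi
    sleep 0.1
done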
00:06:44.904 16:01:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 70883 ]] 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 70883 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70883 00:06:44.904 16:01:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 70883 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:45.164 SPDK target shutdown done 00:06:45.164 Success 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:45.164 16:01:51 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:45.164 16:01:51 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:45.164 00:06:45.164 real 0m1.796s 00:06:45.164 user 0m1.696s 00:06:45.164 sys 0m0.320s 00:06:45.164 ************************************ 00:06:45.164 END TEST json_config_extra_key 00:06:45.164 ************************************ 00:06:45.164 16:01:51 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.164 16:01:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:45.164 16:01:51 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.164 16:01:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:45.164 16:01:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.164 16:01:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.164 ************************************ 00:06:45.164 START TEST alias_rpc 00:06:45.164 ************************************ 00:06:45.164 16:01:51 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:45.423 * Looking for test storage... 
00:06:45.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:45.423 16:01:51 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:45.423 16:01:51 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:45.423 16:01:51 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:45.423 16:01:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:45.423 16:01:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:45.424 16:01:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:45.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:45.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.424 --rc genhtml_branch_coverage=1 00:06:45.424 --rc genhtml_function_coverage=1 00:06:45.424 --rc genhtml_legend=1 00:06:45.424 --rc geninfo_all_blocks=1 00:06:45.424 --rc geninfo_unexecuted_blocks=1 00:06:45.424 00:06:45.424 ' 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:45.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.424 --rc genhtml_branch_coverage=1 00:06:45.424 --rc genhtml_function_coverage=1 00:06:45.424 --rc genhtml_legend=1 00:06:45.424 --rc geninfo_all_blocks=1 00:06:45.424 --rc geninfo_unexecuted_blocks=1 00:06:45.424 00:06:45.424 ' 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:45.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.424 --rc genhtml_branch_coverage=1 00:06:45.424 --rc genhtml_function_coverage=1 00:06:45.424 --rc genhtml_legend=1 00:06:45.424 --rc geninfo_all_blocks=1 00:06:45.424 --rc geninfo_unexecuted_blocks=1 00:06:45.424 00:06:45.424 ' 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:45.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:45.424 --rc genhtml_branch_coverage=1 00:06:45.424 --rc genhtml_function_coverage=1 00:06:45.424 --rc genhtml_legend=1 00:06:45.424 --rc geninfo_all_blocks=1 00:06:45.424 --rc geninfo_unexecuted_blocks=1 00:06:45.424 00:06:45.424 ' 00:06:45.424 16:01:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:45.424 16:01:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=70961 00:06:45.424 16:01:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 70961 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 70961 ']' 00:06:45.424 16:01:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.424 16:01:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:45.424 [2024-11-19 16:01:52.115338] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:45.424 [2024-11-19 16:01:52.115686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70961 ] 00:06:45.683 [2024-11-19 16:01:52.257558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.683 [2024-11-19 16:01:52.276603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.683 [2024-11-19 16:01:52.311900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.622 16:01:53 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.622 16:01:53 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:46.622 16:01:53 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:46.881 16:01:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 70961 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 70961 ']' 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 70961 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70961 00:06:46.881 killing process with pid 70961 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70961' 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 70961 00:06:46.881 16:01:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 70961 00:06:47.141 ************************************ 00:06:47.141 END TEST alias_rpc 00:06:47.141 ************************************ 00:06:47.141 00:06:47.141 real 0m1.765s 00:06:47.141 user 0m2.111s 00:06:47.141 sys 0m0.345s 00:06:47.141 16:01:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.141 16:01:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.141 16:01:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:47.141 16:01:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.141 16:01:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.141 16:01:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.141 16:01:53 -- common/autotest_common.sh@10 -- # set +x 00:06:47.141 ************************************ 00:06:47.141 START TEST spdkcli_tcp 00:06:47.141 ************************************ 00:06:47.141 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:47.141 * Looking for test storage... 
00:06:47.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:47.141 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:47.141 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:47.141 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:47.141 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.141 16:01:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.401 16:01:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:47.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.401 --rc genhtml_branch_coverage=1 00:06:47.401 --rc genhtml_function_coverage=1 00:06:47.401 --rc genhtml_legend=1 00:06:47.401 --rc geninfo_all_blocks=1 00:06:47.401 --rc geninfo_unexecuted_blocks=1 00:06:47.401 00:06:47.401 ' 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:47.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.401 --rc genhtml_branch_coverage=1 00:06:47.401 --rc genhtml_function_coverage=1 00:06:47.401 --rc genhtml_legend=1 00:06:47.401 --rc geninfo_all_blocks=1 00:06:47.401 --rc geninfo_unexecuted_blocks=1 00:06:47.401 
00:06:47.401 ' 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:47.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.401 --rc genhtml_branch_coverage=1 00:06:47.401 --rc genhtml_function_coverage=1 00:06:47.401 --rc genhtml_legend=1 00:06:47.401 --rc geninfo_all_blocks=1 00:06:47.401 --rc geninfo_unexecuted_blocks=1 00:06:47.401 00:06:47.401 ' 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:47.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.401 --rc genhtml_branch_coverage=1 00:06:47.401 --rc genhtml_function_coverage=1 00:06:47.401 --rc genhtml_legend=1 00:06:47.401 --rc geninfo_all_blocks=1 00:06:47.401 --rc geninfo_unexecuted_blocks=1 00:06:47.401 00:06:47.401 ' 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=71039 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 71039 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 71039 ']' 00:06:47.401 16:01:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.401 16:01:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.401 [2024-11-19 16:01:53.927386] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:47.401 [2024-11-19 16:01:53.927477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71039 ] 00:06:47.401 [2024-11-19 16:01:54.069212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.401 [2024-11-19 16:01:54.089829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.401 [2024-11-19 16:01:54.089837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.661 [2024-11-19 16:01:54.126198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.661 16:01:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:47.661 16:01:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:47.661 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=71049 00:06:47.661 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:47.661 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:47.921 [ 00:06:47.921 "bdev_malloc_delete", 00:06:47.921 "bdev_malloc_create", 00:06:47.921 "bdev_null_resize", 00:06:47.921 "bdev_null_delete", 00:06:47.921 "bdev_null_create", 00:06:47.921 "bdev_nvme_cuse_unregister", 00:06:47.921 "bdev_nvme_cuse_register", 00:06:47.921 "bdev_opal_new_user", 00:06:47.921 "bdev_opal_set_lock_state", 00:06:47.921 "bdev_opal_delete", 00:06:47.921 "bdev_opal_get_info", 00:06:47.921 "bdev_opal_create", 00:06:47.921 "bdev_nvme_opal_revert", 00:06:47.921 "bdev_nvme_opal_init", 00:06:47.921 "bdev_nvme_send_cmd", 00:06:47.921 "bdev_nvme_set_keys", 00:06:47.921 "bdev_nvme_get_path_iostat", 00:06:47.921 "bdev_nvme_get_mdns_discovery_info", 00:06:47.921 "bdev_nvme_stop_mdns_discovery", 00:06:47.921 "bdev_nvme_start_mdns_discovery", 00:06:47.921 "bdev_nvme_set_multipath_policy", 00:06:47.921 "bdev_nvme_set_preferred_path", 00:06:47.921 "bdev_nvme_get_io_paths", 00:06:47.921 "bdev_nvme_remove_error_injection", 00:06:47.921 "bdev_nvme_add_error_injection", 00:06:47.921 "bdev_nvme_get_discovery_info", 00:06:47.921 "bdev_nvme_stop_discovery", 00:06:47.921 "bdev_nvme_start_discovery", 00:06:47.921 "bdev_nvme_get_controller_health_info", 00:06:47.921 "bdev_nvme_disable_controller", 00:06:47.921 "bdev_nvme_enable_controller", 00:06:47.921 "bdev_nvme_reset_controller", 00:06:47.921 "bdev_nvme_get_transport_statistics", 00:06:47.921 "bdev_nvme_apply_firmware", 00:06:47.921 "bdev_nvme_detach_controller", 00:06:47.921 "bdev_nvme_get_controllers", 00:06:47.921 "bdev_nvme_attach_controller", 00:06:47.921 "bdev_nvme_set_hotplug", 00:06:47.921 "bdev_nvme_set_options", 00:06:47.921 "bdev_passthru_delete", 00:06:47.921 "bdev_passthru_create", 00:06:47.921 "bdev_lvol_set_parent_bdev", 00:06:47.921 "bdev_lvol_set_parent", 00:06:47.921 "bdev_lvol_check_shallow_copy", 00:06:47.921 "bdev_lvol_start_shallow_copy", 00:06:47.921 "bdev_lvol_grow_lvstore", 00:06:47.921 "bdev_lvol_get_lvols", 00:06:47.921 "bdev_lvol_get_lvstores", 00:06:47.921 "bdev_lvol_delete", 00:06:47.921 "bdev_lvol_set_read_only", 00:06:47.921 "bdev_lvol_resize", 00:06:47.921 "bdev_lvol_decouple_parent", 00:06:47.921 "bdev_lvol_inflate", 00:06:47.921 "bdev_lvol_rename", 00:06:47.921 "bdev_lvol_clone_bdev", 00:06:47.921 "bdev_lvol_clone", 00:06:47.921 "bdev_lvol_snapshot", 
00:06:47.921 "bdev_lvol_create", 00:06:47.921 "bdev_lvol_delete_lvstore", 00:06:47.921 "bdev_lvol_rename_lvstore", 00:06:47.921 "bdev_lvol_create_lvstore", 00:06:47.921 "bdev_raid_set_options", 00:06:47.921 "bdev_raid_remove_base_bdev", 00:06:47.921 "bdev_raid_add_base_bdev", 00:06:47.921 "bdev_raid_delete", 00:06:47.921 "bdev_raid_create", 00:06:47.921 "bdev_raid_get_bdevs", 00:06:47.921 "bdev_error_inject_error", 00:06:47.921 "bdev_error_delete", 00:06:47.921 "bdev_error_create", 00:06:47.921 "bdev_split_delete", 00:06:47.921 "bdev_split_create", 00:06:47.921 "bdev_delay_delete", 00:06:47.921 "bdev_delay_create", 00:06:47.921 "bdev_delay_update_latency", 00:06:47.921 "bdev_zone_block_delete", 00:06:47.921 "bdev_zone_block_create", 00:06:47.921 "blobfs_create", 00:06:47.921 "blobfs_detect", 00:06:47.921 "blobfs_set_cache_size", 00:06:47.921 "bdev_aio_delete", 00:06:47.921 "bdev_aio_rescan", 00:06:47.921 "bdev_aio_create", 00:06:47.921 "bdev_ftl_set_property", 00:06:47.921 "bdev_ftl_get_properties", 00:06:47.921 "bdev_ftl_get_stats", 00:06:47.921 "bdev_ftl_unmap", 00:06:47.921 "bdev_ftl_unload", 00:06:47.921 "bdev_ftl_delete", 00:06:47.921 "bdev_ftl_load", 00:06:47.921 "bdev_ftl_create", 00:06:47.921 "bdev_virtio_attach_controller", 00:06:47.921 "bdev_virtio_scsi_get_devices", 00:06:47.921 "bdev_virtio_detach_controller", 00:06:47.921 "bdev_virtio_blk_set_hotplug", 00:06:47.921 "bdev_iscsi_delete", 00:06:47.921 "bdev_iscsi_create", 00:06:47.921 "bdev_iscsi_set_options", 00:06:47.921 "bdev_uring_delete", 00:06:47.921 "bdev_uring_rescan", 00:06:47.921 "bdev_uring_create", 00:06:47.921 "accel_error_inject_error", 00:06:47.921 "ioat_scan_accel_module", 00:06:47.922 "dsa_scan_accel_module", 00:06:47.922 "iaa_scan_accel_module", 00:06:47.922 "vfu_virtio_create_fs_endpoint", 00:06:47.922 "vfu_virtio_create_scsi_endpoint", 00:06:47.922 "vfu_virtio_scsi_remove_target", 00:06:47.922 "vfu_virtio_scsi_add_target", 00:06:47.922 "vfu_virtio_create_blk_endpoint", 00:06:47.922 "vfu_virtio_delete_endpoint", 00:06:47.922 "keyring_file_remove_key", 00:06:47.922 "keyring_file_add_key", 00:06:47.922 "keyring_linux_set_options", 00:06:47.922 "fsdev_aio_delete", 00:06:47.922 "fsdev_aio_create", 00:06:47.922 "iscsi_get_histogram", 00:06:47.922 "iscsi_enable_histogram", 00:06:47.922 "iscsi_set_options", 00:06:47.922 "iscsi_get_auth_groups", 00:06:47.922 "iscsi_auth_group_remove_secret", 00:06:47.922 "iscsi_auth_group_add_secret", 00:06:47.922 "iscsi_delete_auth_group", 00:06:47.922 "iscsi_create_auth_group", 00:06:47.922 "iscsi_set_discovery_auth", 00:06:47.922 "iscsi_get_options", 00:06:47.922 "iscsi_target_node_request_logout", 00:06:47.922 "iscsi_target_node_set_redirect", 00:06:47.922 "iscsi_target_node_set_auth", 00:06:47.922 "iscsi_target_node_add_lun", 00:06:47.922 "iscsi_get_stats", 00:06:47.922 "iscsi_get_connections", 00:06:47.922 "iscsi_portal_group_set_auth", 00:06:47.922 "iscsi_start_portal_group", 00:06:47.922 "iscsi_delete_portal_group", 00:06:47.922 "iscsi_create_portal_group", 00:06:47.922 "iscsi_get_portal_groups", 00:06:47.922 "iscsi_delete_target_node", 00:06:47.922 "iscsi_target_node_remove_pg_ig_maps", 00:06:47.922 "iscsi_target_node_add_pg_ig_maps", 00:06:47.922 "iscsi_create_target_node", 00:06:47.922 "iscsi_get_target_nodes", 00:06:47.922 "iscsi_delete_initiator_group", 00:06:47.922 "iscsi_initiator_group_remove_initiators", 00:06:47.922 "iscsi_initiator_group_add_initiators", 00:06:47.922 "iscsi_create_initiator_group", 00:06:47.922 "iscsi_get_initiator_groups", 00:06:47.922 
"nvmf_set_crdt", 00:06:47.922 "nvmf_set_config", 00:06:47.922 "nvmf_set_max_subsystems", 00:06:47.922 "nvmf_stop_mdns_prr", 00:06:47.922 "nvmf_publish_mdns_prr", 00:06:47.922 "nvmf_subsystem_get_listeners", 00:06:47.922 "nvmf_subsystem_get_qpairs", 00:06:47.922 "nvmf_subsystem_get_controllers", 00:06:47.922 "nvmf_get_stats", 00:06:47.922 "nvmf_get_transports", 00:06:47.922 "nvmf_create_transport", 00:06:47.922 "nvmf_get_targets", 00:06:47.922 "nvmf_delete_target", 00:06:47.922 "nvmf_create_target", 00:06:47.922 "nvmf_subsystem_allow_any_host", 00:06:47.922 "nvmf_subsystem_set_keys", 00:06:47.922 "nvmf_subsystem_remove_host", 00:06:47.922 "nvmf_subsystem_add_host", 00:06:47.922 "nvmf_ns_remove_host", 00:06:47.922 "nvmf_ns_add_host", 00:06:47.922 "nvmf_subsystem_remove_ns", 00:06:47.922 "nvmf_subsystem_set_ns_ana_group", 00:06:47.922 "nvmf_subsystem_add_ns", 00:06:47.922 "nvmf_subsystem_listener_set_ana_state", 00:06:47.922 "nvmf_discovery_get_referrals", 00:06:47.922 "nvmf_discovery_remove_referral", 00:06:47.922 "nvmf_discovery_add_referral", 00:06:47.922 "nvmf_subsystem_remove_listener", 00:06:47.922 "nvmf_subsystem_add_listener", 00:06:47.922 "nvmf_delete_subsystem", 00:06:47.922 "nvmf_create_subsystem", 00:06:47.922 "nvmf_get_subsystems", 00:06:47.922 "env_dpdk_get_mem_stats", 00:06:47.922 "nbd_get_disks", 00:06:47.922 "nbd_stop_disk", 00:06:47.922 "nbd_start_disk", 00:06:47.922 "ublk_recover_disk", 00:06:47.922 "ublk_get_disks", 00:06:47.922 "ublk_stop_disk", 00:06:47.922 "ublk_start_disk", 00:06:47.922 "ublk_destroy_target", 00:06:47.922 "ublk_create_target", 00:06:47.922 "virtio_blk_create_transport", 00:06:47.922 "virtio_blk_get_transports", 00:06:47.922 "vhost_controller_set_coalescing", 00:06:47.922 "vhost_get_controllers", 00:06:47.922 "vhost_delete_controller", 00:06:47.922 "vhost_create_blk_controller", 00:06:47.922 "vhost_scsi_controller_remove_target", 00:06:47.922 "vhost_scsi_controller_add_target", 00:06:47.922 "vhost_start_scsi_controller", 00:06:47.922 "vhost_create_scsi_controller", 00:06:47.922 "thread_set_cpumask", 00:06:47.922 "scheduler_set_options", 00:06:47.922 "framework_get_governor", 00:06:47.922 "framework_get_scheduler", 00:06:47.922 "framework_set_scheduler", 00:06:47.922 "framework_get_reactors", 00:06:47.922 "thread_get_io_channels", 00:06:47.922 "thread_get_pollers", 00:06:47.922 "thread_get_stats", 00:06:47.922 "framework_monitor_context_switch", 00:06:47.922 "spdk_kill_instance", 00:06:47.922 "log_enable_timestamps", 00:06:47.922 "log_get_flags", 00:06:47.922 "log_clear_flag", 00:06:47.922 "log_set_flag", 00:06:47.922 "log_get_level", 00:06:47.922 "log_set_level", 00:06:47.922 "log_get_print_level", 00:06:47.922 "log_set_print_level", 00:06:47.922 "framework_enable_cpumask_locks", 00:06:47.922 "framework_disable_cpumask_locks", 00:06:47.922 "framework_wait_init", 00:06:47.922 "framework_start_init", 00:06:47.922 "scsi_get_devices", 00:06:47.922 "bdev_get_histogram", 00:06:47.922 "bdev_enable_histogram", 00:06:47.922 "bdev_set_qos_limit", 00:06:47.922 "bdev_set_qd_sampling_period", 00:06:47.922 "bdev_get_bdevs", 00:06:47.922 "bdev_reset_iostat", 00:06:47.922 "bdev_get_iostat", 00:06:47.922 "bdev_examine", 00:06:47.922 "bdev_wait_for_examine", 00:06:47.922 "bdev_set_options", 00:06:47.922 "accel_get_stats", 00:06:47.922 "accel_set_options", 00:06:47.922 "accel_set_driver", 00:06:47.922 "accel_crypto_key_destroy", 00:06:47.922 "accel_crypto_keys_get", 00:06:47.922 "accel_crypto_key_create", 00:06:47.922 "accel_assign_opc", 00:06:47.922 
"accel_get_module_info", 00:06:47.922 "accel_get_opc_assignments", 00:06:47.922 "vmd_rescan", 00:06:47.922 "vmd_remove_device", 00:06:47.922 "vmd_enable", 00:06:47.922 "sock_get_default_impl", 00:06:47.922 "sock_set_default_impl", 00:06:47.922 "sock_impl_set_options", 00:06:47.922 "sock_impl_get_options", 00:06:47.922 "iobuf_get_stats", 00:06:47.922 "iobuf_set_options", 00:06:47.922 "keyring_get_keys", 00:06:47.922 "vfu_tgt_set_base_path", 00:06:47.922 "framework_get_pci_devices", 00:06:47.922 "framework_get_config", 00:06:47.922 "framework_get_subsystems", 00:06:47.922 "fsdev_set_opts", 00:06:47.922 "fsdev_get_opts", 00:06:47.922 "trace_get_info", 00:06:47.922 "trace_get_tpoint_group_mask", 00:06:47.922 "trace_disable_tpoint_group", 00:06:47.922 "trace_enable_tpoint_group", 00:06:47.922 "trace_clear_tpoint_mask", 00:06:47.922 "trace_set_tpoint_mask", 00:06:47.922 "notify_get_notifications", 00:06:47.922 "notify_get_types", 00:06:47.922 "spdk_get_version", 00:06:47.922 "rpc_get_methods" 00:06:47.922 ] 00:06:47.922 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:47.922 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:47.922 16:01:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 71039 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 71039 ']' 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 71039 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71039 00:06:47.922 killing process with pid 71039 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71039' 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 71039 00:06:47.922 16:01:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 71039 00:06:48.182 ************************************ 00:06:48.182 END TEST spdkcli_tcp 00:06:48.182 ************************************ 00:06:48.182 00:06:48.182 real 0m1.098s 00:06:48.182 user 0m1.881s 00:06:48.182 sys 0m0.359s 00:06:48.182 16:01:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.182 16:01:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.182 16:01:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.182 16:01:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:48.182 16:01:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.182 16:01:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.182 ************************************ 00:06:48.182 START TEST dpdk_mem_utility 00:06:48.182 ************************************ 00:06:48.182 16:01:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:48.442 * Looking for test storage... 
00:06:48.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:48.442 16:01:54 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.442 16:01:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.442 16:01:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.442 16:01:54 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.442 16:01:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.442 16:01:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.442 16:01:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.442 16:01:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:48.443 16:01:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.443 16:01:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.443 --rc genhtml_branch_coverage=1 00:06:48.443 --rc genhtml_function_coverage=1 00:06:48.443 --rc genhtml_legend=1 00:06:48.443 --rc geninfo_all_blocks=1 00:06:48.443 --rc geninfo_unexecuted_blocks=1 00:06:48.443 00:06:48.443 ' 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.443 --rc 
genhtml_branch_coverage=1 00:06:48.443 --rc genhtml_function_coverage=1 00:06:48.443 --rc genhtml_legend=1 00:06:48.443 --rc geninfo_all_blocks=1 00:06:48.443 --rc geninfo_unexecuted_blocks=1 00:06:48.443 00:06:48.443 ' 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.443 --rc genhtml_branch_coverage=1 00:06:48.443 --rc genhtml_function_coverage=1 00:06:48.443 --rc genhtml_legend=1 00:06:48.443 --rc geninfo_all_blocks=1 00:06:48.443 --rc geninfo_unexecuted_blocks=1 00:06:48.443 00:06:48.443 ' 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.443 --rc genhtml_branch_coverage=1 00:06:48.443 --rc genhtml_function_coverage=1 00:06:48.443 --rc genhtml_legend=1 00:06:48.443 --rc geninfo_all_blocks=1 00:06:48.443 --rc geninfo_unexecuted_blocks=1 00:06:48.443 00:06:48.443 ' 00:06:48.443 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:48.443 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=71131 00:06:48.443 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:48.443 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 71131 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 71131 ']' 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.443 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:48.443 [2024-11-19 16:01:55.083168] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:48.443 [2024-11-19 16:01:55.083744] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71131 ] 00:06:48.703 [2024-11-19 16:01:55.231814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.703 [2024-11-19 16:01:55.251529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.703 [2024-11-19 16:01:55.286569] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.703 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.703 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:48.703 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:48.703 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:48.703 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.703 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:48.703 { 00:06:48.703 "filename": "/tmp/spdk_mem_dump.txt" 00:06:48.703 } 00:06:48.703 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.703 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:48.965 DPDK memory size 810.000000 MiB in 1 heap(s) 00:06:48.965 1 heaps totaling size 810.000000 MiB 00:06:48.965 size: 810.000000 MiB heap id: 0 00:06:48.965 end heaps---------- 00:06:48.965 9 mempools totaling size 595.772034 MiB 00:06:48.965 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:48.965 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:48.965 size: 92.545471 MiB name: bdev_io_71131 00:06:48.965 size: 50.003479 MiB name: msgpool_71131 00:06:48.965 size: 36.509338 MiB name: fsdev_io_71131 00:06:48.965 size: 21.763794 MiB name: PDU_Pool 00:06:48.965 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:48.965 size: 4.133484 MiB name: evtpool_71131 00:06:48.965 size: 0.026123 MiB name: Session_Pool 00:06:48.965 end mempools------- 00:06:48.965 6 memzones totaling size 4.142822 MiB 00:06:48.965 size: 1.000366 MiB name: RG_ring_0_71131 00:06:48.965 size: 1.000366 MiB name: RG_ring_1_71131 00:06:48.965 size: 1.000366 MiB name: RG_ring_4_71131 00:06:48.965 size: 1.000366 MiB name: RG_ring_5_71131 00:06:48.965 size: 0.125366 MiB name: RG_ring_2_71131 00:06:48.965 size: 0.015991 MiB name: RG_ring_3_71131 00:06:48.965 end memzones------- 00:06:48.965 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:48.965 heap id: 0 total size: 810.000000 MiB number of busy elements: 318 number of free elements: 15 00:06:48.965 list of free elements. 
size: 10.812317 MiB 00:06:48.965 element at address: 0x200018a00000 with size: 0.999878 MiB 00:06:48.965 element at address: 0x200018c00000 with size: 0.999878 MiB 00:06:48.965 element at address: 0x200031800000 with size: 0.994446 MiB 00:06:48.965 element at address: 0x200000400000 with size: 0.993958 MiB 00:06:48.965 element at address: 0x200006400000 with size: 0.959839 MiB 00:06:48.965 element at address: 0x200012c00000 with size: 0.954285 MiB 00:06:48.965 element at address: 0x200018e00000 with size: 0.936584 MiB 00:06:48.965 element at address: 0x200000200000 with size: 0.717346 MiB 00:06:48.965 element at address: 0x20001a600000 with size: 0.566772 MiB 00:06:48.965 element at address: 0x20000a600000 with size: 0.488892 MiB 00:06:48.965 element at address: 0x200000c00000 with size: 0.487000 MiB 00:06:48.965 element at address: 0x200019000000 with size: 0.485657 MiB 00:06:48.965 element at address: 0x200003e00000 with size: 0.480286 MiB 00:06:48.965 element at address: 0x200027a00000 with size: 0.395752 MiB 00:06:48.965 element at address: 0x200000800000 with size: 0.351746 MiB 00:06:48.965 list of standard malloc elements. size: 199.268799 MiB 00:06:48.965 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:06:48.965 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:06:48.965 element at address: 0x200018afff80 with size: 1.000122 MiB 00:06:48.965 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:06:48.965 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:48.965 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:48.965 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:06:48.965 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:48.965 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:06:48.965 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:06:48.965 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:06:48.965 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000085e580 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087e840 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087e900 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f080 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f140 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f200 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f380 with size: 0.000183 MiB 00:06:48.965 element at address: 0x20000087f440 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000087f500 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000087f680 with size: 0.000183 MiB 00:06:48.966 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:06:48.966 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:06:48.966 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000cff000 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200003efb980 with size: 0.000183 MiB 00:06:48.966 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691180 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691240 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691300 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691480 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691540 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691600 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691780 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691840 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691900 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692080 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692140 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692200 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692380 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692440 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692500 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692680 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692740 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692800 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692980 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692bc0 with size: 0.000183 MiB 
00:06:48.966 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a693040 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a693100 with size: 0.000183 MiB 00:06:48.966 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693280 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693340 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693400 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693580 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693640 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693700 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693880 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693940 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694000 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694180 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694240 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694300 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694480 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694540 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694600 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694780 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694840 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694900 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a695080 with size: 0.000183 MiB 00:06:48.967 element at 
address: 0x20001a695140 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a695200 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a695380 with size: 0.000183 MiB 00:06:48.967 element at address: 0x20001a695440 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a65500 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e280 
with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e4c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:06:48.967 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:06:48.968 list of memzone associated elements. 
size: 599.918884 MiB 00:06:48.968 element at address: 0x20001a695500 with size: 211.416748 MiB 00:06:48.968 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:48.968 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:06:48.968 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:48.968 element at address: 0x200012df4780 with size: 92.045044 MiB 00:06:48.968 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_71131_0 00:06:48.968 element at address: 0x200000dff380 with size: 48.003052 MiB 00:06:48.968 associated memzone info: size: 48.002930 MiB name: MP_msgpool_71131_0 00:06:48.968 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:06:48.968 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_71131_0 00:06:48.968 element at address: 0x2000191be940 with size: 20.255554 MiB 00:06:48.968 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:48.968 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:06:48.968 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:48.968 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:06:48.968 associated memzone info: size: 3.000122 MiB name: MP_evtpool_71131_0 00:06:48.968 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:06:48.968 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_71131 00:06:48.968 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:48.968 associated memzone info: size: 1.007996 MiB name: MP_evtpool_71131 00:06:48.968 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:06:48.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:48.968 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:06:48.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:48.968 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:06:48.968 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:48.968 element at address: 0x200003efba40 with size: 1.008118 MiB 00:06:48.968 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:48.968 element at address: 0x200000cff180 with size: 1.000488 MiB 00:06:48.968 associated memzone info: size: 1.000366 MiB name: RG_ring_0_71131 00:06:48.968 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:06:48.968 associated memzone info: size: 1.000366 MiB name: RG_ring_1_71131 00:06:48.968 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:06:48.968 associated memzone info: size: 1.000366 MiB name: RG_ring_4_71131 00:06:48.968 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:06:48.968 associated memzone info: size: 1.000366 MiB name: RG_ring_5_71131 00:06:48.968 element at address: 0x20000087f740 with size: 0.500488 MiB 00:06:48.968 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_71131 00:06:48.968 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:06:48.968 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_71131 00:06:48.968 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:06:48.968 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:48.968 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:06:48.968 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:48.968 element at address: 0x20001907c540 with size: 0.250488 MiB 00:06:48.968 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:48.968 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:06:48.968 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_71131 00:06:48.968 element at address: 0x20000085e640 with size: 0.125488 MiB 00:06:48.968 associated memzone info: size: 0.125366 MiB name: RG_ring_2_71131 00:06:48.968 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:06:48.968 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:48.968 element at address: 0x200027a65680 with size: 0.023743 MiB 00:06:48.968 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:48.968 element at address: 0x20000085a380 with size: 0.016113 MiB 00:06:48.968 associated memzone info: size: 0.015991 MiB name: RG_ring_3_71131 00:06:48.968 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:06:48.968 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:48.968 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:06:48.968 associated memzone info: size: 0.000183 MiB name: MP_msgpool_71131 00:06:48.968 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:06:48.968 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_71131 00:06:48.968 element at address: 0x20000085a180 with size: 0.000305 MiB 00:06:48.968 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_71131 00:06:48.968 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:06:48.968 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:48.968 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:48.968 16:01:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 71131 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 71131 ']' 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 71131 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71131 00:06:48.968 killing process with pid 71131 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71131' 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 71131 00:06:48.968 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 71131 00:06:49.227 ************************************ 00:06:49.228 END TEST dpdk_mem_utility 00:06:49.228 ************************************ 00:06:49.228 00:06:49.228 real 0m0.980s 00:06:49.228 user 0m1.069s 00:06:49.228 sys 0m0.287s 00:06:49.228 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.228 16:01:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:49.228 16:01:55 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:49.228 16:01:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:49.228 16:01:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.228 16:01:55 -- common/autotest_common.sh@10 -- # set +x 
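The dpdk_mem_utility test that finishes here asks the running spdk_tgt for a DPDK memory dump via the env_dpdk_get_mem_stats RPC (which reported the dump file /tmp/spdk_mem_dump.txt above) and then post-processes it with scripts/dpdk_mem_info.py: the first invocation printed the heap/mempool/memzone summary, and dpdk_mem_info.py -m 0 produced the element-by-element listing for heap 0. A rough stand-alone sketch against a running target, using the paths from this log and assuming the script reads the default dump file written by the RPC:
    # write the DPDK memory statistics of the running target to /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # summarize heaps, mempools and memzones from the dump
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    # detailed element listing for heap id 0, as shown in the output above
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0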
00:06:49.228 ************************************ 00:06:49.228 START TEST event 00:06:49.228 ************************************ 00:06:49.228 16:01:55 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:49.228 * Looking for test storage... 00:06:49.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:49.228 16:01:55 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:49.487 16:01:55 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:49.487 16:01:55 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:49.487 16:01:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:49.487 16:01:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.487 16:01:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.487 16:01:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.488 16:01:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.488 16:01:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.488 16:01:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.488 16:01:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.488 16:01:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.488 16:01:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.488 16:01:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.488 16:01:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.488 16:01:56 event -- scripts/common.sh@344 -- # case "$op" in 00:06:49.488 16:01:56 event -- scripts/common.sh@345 -- # : 1 00:06:49.488 16:01:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.488 16:01:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:49.488 16:01:56 event -- scripts/common.sh@365 -- # decimal 1 00:06:49.488 16:01:56 event -- scripts/common.sh@353 -- # local d=1 00:06:49.488 16:01:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.488 16:01:56 event -- scripts/common.sh@355 -- # echo 1 00:06:49.488 16:01:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.488 16:01:56 event -- scripts/common.sh@366 -- # decimal 2 00:06:49.488 16:01:56 event -- scripts/common.sh@353 -- # local d=2 00:06:49.488 16:01:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.488 16:01:56 event -- scripts/common.sh@355 -- # echo 2 00:06:49.488 16:01:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.488 16:01:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.488 16:01:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.488 16:01:56 event -- scripts/common.sh@368 -- # return 0 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:49.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.488 --rc genhtml_branch_coverage=1 00:06:49.488 --rc genhtml_function_coverage=1 00:06:49.488 --rc genhtml_legend=1 00:06:49.488 --rc geninfo_all_blocks=1 00:06:49.488 --rc geninfo_unexecuted_blocks=1 00:06:49.488 00:06:49.488 ' 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:49.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.488 --rc genhtml_branch_coverage=1 00:06:49.488 --rc genhtml_function_coverage=1 00:06:49.488 --rc genhtml_legend=1 00:06:49.488 --rc 
geninfo_all_blocks=1 00:06:49.488 --rc geninfo_unexecuted_blocks=1 00:06:49.488 00:06:49.488 ' 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:49.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.488 --rc genhtml_branch_coverage=1 00:06:49.488 --rc genhtml_function_coverage=1 00:06:49.488 --rc genhtml_legend=1 00:06:49.488 --rc geninfo_all_blocks=1 00:06:49.488 --rc geninfo_unexecuted_blocks=1 00:06:49.488 00:06:49.488 ' 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:49.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.488 --rc genhtml_branch_coverage=1 00:06:49.488 --rc genhtml_function_coverage=1 00:06:49.488 --rc genhtml_legend=1 00:06:49.488 --rc geninfo_all_blocks=1 00:06:49.488 --rc geninfo_unexecuted_blocks=1 00:06:49.488 00:06:49.488 ' 00:06:49.488 16:01:56 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:49.488 16:01:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:49.488 16:01:56 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:49.488 16:01:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.488 16:01:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:49.488 ************************************ 00:06:49.488 START TEST event_perf 00:06:49.488 ************************************ 00:06:49.488 16:01:56 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:49.488 Running I/O for 1 seconds...[2024-11-19 16:01:56.084448] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:49.488 [2024-11-19 16:01:56.084703] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71203 ] 00:06:49.747 [2024-11-19 16:01:56.231050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:49.747 [2024-11-19 16:01:56.252062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.747 [2024-11-19 16:01:56.252193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.747 [2024-11-19 16:01:56.252346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.747 Running I/O for 1 seconds...[2024-11-19 16:01:56.252346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.685 00:06:50.685 lcore 0: 204642 00:06:50.685 lcore 1: 204643 00:06:50.685 lcore 2: 204642 00:06:50.685 lcore 3: 204641 00:06:50.685 done. 
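The four "lcore N:" counters and the trailing "done." above are the complete output of a one-second event_perf run on cores 0-3. A hedged sketch of rerunning that benchmark by hand, assuming the same vagrant workspace layout and a host already prepared for SPDK (hugepages configured, run as root):

APP=/home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf
sudo "$APP" -m 0xF -t 1   # -m 0xF: one reactor on each of cores 0-3, -t 1: run for one second
# each reactor prints "lcore N: <events processed>" and the app finishes with "done."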
00:06:50.685 00:06:50.685 real 0m1.221s 00:06:50.685 ************************************ 00:06:50.685 END TEST event_perf 00:06:50.685 ************************************ 00:06:50.685 user 0m4.060s 00:06:50.685 sys 0m0.039s 00:06:50.685 16:01:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.685 16:01:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.685 16:01:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.685 16:01:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:50.685 16:01:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.685 16:01:57 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.685 ************************************ 00:06:50.685 START TEST event_reactor 00:06:50.685 ************************************ 00:06:50.686 16:01:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:50.686 [2024-11-19 16:01:57.354204] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:06:50.686 [2024-11-19 16:01:57.354310] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71236 ] 00:06:50.945 [2024-11-19 16:01:57.499332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.945 [2024-11-19 16:01:57.517475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.900 test_start 00:06:51.900 oneshot 00:06:51.900 tick 100 00:06:51.900 tick 100 00:06:51.900 tick 250 00:06:51.900 tick 100 00:06:51.900 tick 100 00:06:51.900 tick 250 00:06:51.900 tick 100 00:06:51.900 tick 500 00:06:51.900 tick 100 00:06:51.900 tick 100 00:06:51.900 tick 250 00:06:51.900 tick 100 00:06:51.900 tick 100 00:06:51.900 test_end 00:06:51.900 ************************************ 00:06:51.900 END TEST event_reactor 00:06:51.900 ************************************ 00:06:51.900 00:06:51.900 real 0m1.215s 00:06:51.900 user 0m1.073s 00:06:51.900 sys 0m0.036s 00:06:51.900 16:01:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.900 16:01:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:51.900 16:01:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:51.900 16:01:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:51.900 16:01:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.900 16:01:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.170 ************************************ 00:06:52.170 START TEST event_reactor_perf 00:06:52.170 ************************************ 00:06:52.170 16:01:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:52.170 [2024-11-19 16:01:58.626328] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:52.170 [2024-11-19 16:01:58.626602] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71266 ] 00:06:52.170 [2024-11-19 16:01:58.771224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.170 [2024-11-19 16:01:58.789257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.105 test_start 00:06:53.105 test_end 00:06:53.105 Performance: 439450 events per second 00:06:53.364 00:06:53.364 real 0m1.212s 00:06:53.364 user 0m1.070s 00:06:53.364 sys 0m0.036s 00:06:53.364 16:01:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.364 ************************************ 00:06:53.364 END TEST event_reactor_perf 00:06:53.364 ************************************ 00:06:53.364 16:01:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.364 16:01:59 event -- event/event.sh@49 -- # uname -s 00:06:53.364 16:01:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:53.364 16:01:59 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.364 16:01:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:53.364 16:01:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.364 16:01:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.364 ************************************ 00:06:53.364 START TEST event_scheduler 00:06:53.364 ************************************ 00:06:53.364 16:01:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:53.364 * Looking for test storage... 
00:06:53.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:53.364 16:01:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:53.364 16:01:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:53.364 16:01:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:53.364 16:02:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:53.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.364 16:02:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:53.364 16:02:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.364 16:02:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.364 --rc genhtml_branch_coverage=1 00:06:53.364 --rc genhtml_function_coverage=1 00:06:53.364 --rc genhtml_legend=1 00:06:53.364 --rc geninfo_all_blocks=1 00:06:53.364 --rc geninfo_unexecuted_blocks=1 00:06:53.364 00:06:53.364 ' 00:06:53.364 16:02:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.364 --rc genhtml_branch_coverage=1 00:06:53.364 --rc genhtml_function_coverage=1 00:06:53.364 --rc genhtml_legend=1 00:06:53.364 --rc geninfo_all_blocks=1 00:06:53.365 --rc geninfo_unexecuted_blocks=1 00:06:53.365 00:06:53.365 ' 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.365 --rc genhtml_branch_coverage=1 00:06:53.365 --rc genhtml_function_coverage=1 00:06:53.365 --rc genhtml_legend=1 00:06:53.365 --rc geninfo_all_blocks=1 00:06:53.365 --rc geninfo_unexecuted_blocks=1 00:06:53.365 00:06:53.365 ' 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.365 --rc genhtml_branch_coverage=1 00:06:53.365 --rc genhtml_function_coverage=1 00:06:53.365 --rc genhtml_legend=1 00:06:53.365 --rc geninfo_all_blocks=1 00:06:53.365 --rc geninfo_unexecuted_blocks=1 00:06:53.365 00:06:53.365 ' 00:06:53.365 16:02:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:53.365 16:02:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=71335 00:06:53.365 16:02:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:53.365 16:02:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 71335 00:06:53.365 16:02:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 71335 ']' 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.365 16:02:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:53.624 [2024-11-19 16:02:00.123585] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:53.624 [2024-11-19 16:02:00.123904] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71335 ] 00:06:53.624 [2024-11-19 16:02:00.276852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.624 [2024-11-19 16:02:00.306735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.624 [2024-11-19 16:02:00.307047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.624 [2024-11-19 16:02:00.306905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.624 [2024-11-19 16:02:00.307677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:54.560 16:02:01 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.560 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.560 POWER: Cannot set governor of lcore 0 to performance 00:06:54.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.560 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.560 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:54.560 POWER: Cannot set governor of lcore 0 to userspace 00:06:54.560 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:54.560 POWER: Unable to set Power Management Environment for lcore 0 00:06:54.560 [2024-11-19 16:02:01.088206] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:54.560 [2024-11-19 16:02:01.088227] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:54.560 [2024-11-19 16:02:01.088270] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:54.560 [2024-11-19 16:02:01.088286] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:54.560 [2024-11-19 16:02:01.088295] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:54.560 [2024-11-19 16:02:01.088302] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 [2024-11-19 16:02:01.121389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:54.560 [2024-11-19 16:02:01.139813] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
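Before the create-thread subtest, the harness reconfigures the scheduler app (started with --wait-for-rpc) over RPC: switch to the dynamic scheduler, then finish subsystem init. The POWER/governor errors above are the expected outcome in a VM without cpufreq access; as the log shows, the dynamic scheduler still comes up, just without the dpdk governor. A short sketch of the same two calls made directly with rpc.py against the socket named in the log:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/spdk.sock framework_set_scheduler dynamic   # falls back as logged when the governor is unavailable
"$RPC" -s /var/tmp/spdk.sock framework_start_init              # complete init so scheduler threads can be created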
00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 ************************************ 00:06:54.560 START TEST scheduler_create_thread 00:06:54.560 ************************************ 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 2 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 3 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 4 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 5 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 6 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 7 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 8 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 9 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 10 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.560 16:02:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.466 16:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.466 16:02:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:56.466 16:02:02 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:56.466 16:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.466 16:02:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.402 ************************************ 00:06:57.402 END TEST scheduler_create_thread 00:06:57.402 ************************************ 00:06:57.402 16:02:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.402 00:06:57.402 real 0m2.612s 00:06:57.402 user 0m0.014s 00:06:57.402 sys 0m0.009s 00:06:57.402 16:02:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.402 16:02:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.402 16:02:03 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:57.402 16:02:03 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 71335 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 71335 ']' 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 71335 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71335 00:06:57.402 killing process with pid 71335 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71335' 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 71335 00:06:57.402 16:02:03 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 71335 00:06:57.662 [2024-11-19 16:02:04.242811] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
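The subtest that just finished drives a test-only RPC plugin bundled with the scheduler test: it creates pinned busy and idle threads, a partially active thread whose load is then raised, and one thread created only to be deleted. A condensed, illustrative replay of those calls; thread ids 11 and 12 are simply the ones this run returned, and the plugin module is assumed importable, as the harness arranges before calling rpc.py:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin "$@"; }
rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # 100% busy thread pinned to core 0
rpc scheduler_thread_create -n half_active -a 0              # returned thread id 11 in this run
rpc scheduler_thread_set_active 11 50                        # raise that thread's active load to 50%
rpc scheduler_thread_create -n deleted -a 100                # returned thread id 12
rpc scheduler_thread_delete 12                               # and delete it again before shutdown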
00:06:57.921 00:06:57.921 real 0m4.494s 00:06:57.921 user 0m8.618s 00:06:57.921 sys 0m0.353s 00:06:57.921 16:02:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.921 16:02:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:57.921 ************************************ 00:06:57.921 END TEST event_scheduler 00:06:57.921 ************************************ 00:06:57.921 16:02:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:57.921 16:02:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:57.921 16:02:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.921 16:02:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.921 16:02:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:57.921 ************************************ 00:06:57.921 START TEST app_repeat 00:06:57.921 ************************************ 00:06:57.921 16:02:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=71437 00:06:57.921 16:02:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:57.922 16:02:04 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:57.922 Process app_repeat pid: 71437 00:06:57.922 16:02:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 71437' 00:06:57.922 spdk_app_start Round 0 00:06:57.922 16:02:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:57.922 16:02:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:57.922 16:02:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71437 /var/tmp/spdk-nbd.sock 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71437 ']' 00:06:57.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.922 16:02:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:57.922 [2024-11-19 16:02:04.460863] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:06:57.922 [2024-11-19 16:02:04.460964] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71437 ] 00:06:57.922 [2024-11-19 16:02:04.606526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.922 [2024-11-19 16:02:04.629579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.922 [2024-11-19 16:02:04.629602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.181 [2024-11-19 16:02:04.658706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:58.181 16:02:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.181 16:02:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:58.181 16:02:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.441 Malloc0 00:06:58.441 16:02:04 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.701 Malloc1 00:06:58.701 16:02:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.701 16:02:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.960 /dev/nbd0 00:06:58.960 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.960 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:58.960 16:02:05 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.960 1+0 records in 00:06:58.960 1+0 records out 00:06:58.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030908 s, 13.3 MB/s 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:58.960 16:02:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:58.960 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.960 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.960 16:02:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:59.272 /dev/nbd1 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:59.272 1+0 records in 00:06:59.272 1+0 records out 00:06:59.272 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178807 s, 22.9 MB/s 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:59.272 16:02:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.272 16:02:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.532 { 00:06:59.532 "nbd_device": "/dev/nbd0", 00:06:59.532 "bdev_name": "Malloc0" 00:06:59.532 }, 00:06:59.532 { 00:06:59.532 "nbd_device": "/dev/nbd1", 00:06:59.532 "bdev_name": "Malloc1" 00:06:59.532 } 00:06:59.532 ]' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.532 { 00:06:59.532 "nbd_device": "/dev/nbd0", 00:06:59.532 "bdev_name": "Malloc0" 00:06:59.532 }, 00:06:59.532 { 00:06:59.532 "nbd_device": "/dev/nbd1", 00:06:59.532 "bdev_name": "Malloc1" 00:06:59.532 } 00:06:59.532 ]' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.532 /dev/nbd1' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.532 /dev/nbd1' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.532 256+0 records in 00:06:59.532 256+0 records out 00:06:59.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00981532 s, 107 MB/s 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.532 256+0 records in 00:06:59.532 256+0 records out 00:06:59.532 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226302 s, 46.3 MB/s 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.532 16:02:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.792 256+0 records in 00:06:59.792 256+0 records out 00:06:59.792 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303185 s, 34.6 MB/s 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.792 16:02:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:00.051 16:02:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:00.311 16:02:06 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.311 16:02:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:00.570 16:02:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:00.570 16:02:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.829 16:02:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:00.829 [2024-11-19 16:02:07.462932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.829 [2024-11-19 16:02:07.481411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.829 [2024-11-19 16:02:07.481420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.829 [2024-11-19 16:02:07.509450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.829 [2024-11-19 16:02:07.509544] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.829 [2024-11-19 16:02:07.509557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:04.117 spdk_app_start Round 1 00:07:04.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:04.117 16:02:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:04.117 16:02:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:04.117 16:02:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71437 /var/tmp/spdk-nbd.sock 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71437 ']' 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.117 16:02:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:04.117 16:02:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.376 Malloc0 00:07:04.376 16:02:10 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:04.636 Malloc1 00:07:04.636 16:02:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.636 16:02:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:04.896 /dev/nbd0 00:07:04.896 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.896 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.896 1+0 records in 00:07:04.896 1+0 records out 
00:07:04.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221562 s, 18.5 MB/s 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.896 16:02:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:04.896 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.896 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.896 16:02:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:05.155 /dev/nbd1 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:05.155 1+0 records in 00:07:05.155 1+0 records out 00:07:05.155 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210729 s, 19.4 MB/s 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.155 16:02:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.155 16:02:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.415 16:02:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:05.415 { 00:07:05.415 "nbd_device": "/dev/nbd0", 00:07:05.415 "bdev_name": "Malloc0" 00:07:05.415 }, 00:07:05.415 { 00:07:05.415 "nbd_device": "/dev/nbd1", 00:07:05.415 "bdev_name": "Malloc1" 00:07:05.415 } 
00:07:05.415 ]' 00:07:05.415 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:05.415 { 00:07:05.415 "nbd_device": "/dev/nbd0", 00:07:05.415 "bdev_name": "Malloc0" 00:07:05.415 }, 00:07:05.415 { 00:07:05.415 "nbd_device": "/dev/nbd1", 00:07:05.415 "bdev_name": "Malloc1" 00:07:05.415 } 00:07:05.415 ]' 00:07:05.415 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:05.674 /dev/nbd1' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:05.674 /dev/nbd1' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:05.674 256+0 records in 00:07:05.674 256+0 records out 00:07:05.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00757257 s, 138 MB/s 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:05.674 256+0 records in 00:07:05.674 256+0 records out 00:07:05.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259721 s, 40.4 MB/s 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:05.674 256+0 records in 00:07:05.674 256+0 records out 00:07:05.674 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027428 s, 38.2 MB/s 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.674 16:02:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.933 16:02:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.192 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.192 16:02:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.193 16:02:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.452 16:02:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:06.452 16:02:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:06.452 16:02:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:06.711 16:02:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:06.711 16:02:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:06.970 16:02:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:06.970 [2024-11-19 16:02:13.610676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:06.970 [2024-11-19 16:02:13.628860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.970 [2024-11-19 16:02:13.628866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.970 [2024-11-19 16:02:13.657669] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:06.970 [2024-11-19 16:02:13.657762] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:06.970 [2024-11-19 16:02:13.657775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:10.260 16:02:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.260 spdk_app_start Round 2 00:07:10.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.260 16:02:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:10.260 16:02:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 71437 /var/tmp/spdk-nbd.sock 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71437 ']' 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
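For anyone following the round-1 trace above, the data-verify pass it just completed boils down to the short shell sketch below. The block size, count, oflag=direct write and the cmp -b -n 1M read-back check are copied from the commands visible in the xtrace; the temporary file location and the plain for-loops are illustrative stand-ins for what bdev/nbd_common.sh does with its nbd_list array, not the test's literal code.

  # write 1 MiB of random data through each nbd device, then verify it byte-for-byte
  tmp=/tmp/nbdrandtest                      # the real test keeps this under spdk/test/event
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # push the pattern to the exported bdev
  done
  for dev in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$tmp" "$dev"                              # read back and compare
  done
  rm "$tmp"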
00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.260 16:02:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.261 16:02:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.261 16:02:16 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.519 Malloc0 00:07:10.519 16:02:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.779 Malloc1 00:07:10.779 16:02:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:10.779 16:02:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.038 /dev/nbd0 00:07:11.038 16:02:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.038 16:02:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.038 1+0 records in 00:07:11.038 1+0 records out 
00:07:11.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273446 s, 15.0 MB/s 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.038 16:02:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.038 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.038 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.038 16:02:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.297 /dev/nbd1 00:07:11.297 16:02:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.297 16:02:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.297 16:02:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.298 1+0 records in 00:07:11.298 1+0 records out 00:07:11.298 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031624 s, 13.0 MB/s 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.298 16:02:17 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.298 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.298 16:02:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.298 16:02:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.298 16:02:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.298 16:02:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.556 { 00:07:11.556 "nbd_device": "/dev/nbd0", 00:07:11.556 "bdev_name": "Malloc0" 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "nbd_device": "/dev/nbd1", 00:07:11.556 "bdev_name": "Malloc1" 00:07:11.556 } 
00:07:11.556 ]' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.556 { 00:07:11.556 "nbd_device": "/dev/nbd0", 00:07:11.556 "bdev_name": "Malloc0" 00:07:11.556 }, 00:07:11.556 { 00:07:11.556 "nbd_device": "/dev/nbd1", 00:07:11.556 "bdev_name": "Malloc1" 00:07:11.556 } 00:07:11.556 ]' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.556 /dev/nbd1' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.556 /dev/nbd1' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.556 256+0 records in 00:07:11.556 256+0 records out 00:07:11.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00997057 s, 105 MB/s 00:07:11.556 16:02:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.557 16:02:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.817 256+0 records in 00:07:11.817 256+0 records out 00:07:11.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238255 s, 44.0 MB/s 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.817 256+0 records in 00:07:11.817 256+0 records out 00:07:11.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268252 s, 39.1 MB/s 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.817 16:02:18 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.817 16:02:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.076 16:02:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.335 16:02:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.608 16:02:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.608 16:02:19 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.874 16:02:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.874 [2024-11-19 16:02:19.527155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.874 [2024-11-19 16:02:19.549828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.874 [2024-11-19 16:02:19.549838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.874 [2024-11-19 16:02:19.579222] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:12.874 [2024-11-19 16:02:19.579342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:12.874 [2024-11-19 16:02:19.579356] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:16.163 16:02:22 event.app_repeat -- event/event.sh@38 -- # waitforlisten 71437 /var/tmp/spdk-nbd.sock 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 71437 ']' 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
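Stripped of the xtrace noise, the RPC lifecycle each app_repeat round drives against /var/tmp/spdk-nbd.sock is the sequence below. Every call shown here appears verbatim in the trace (bdev_malloc_create 64 4096, nbd_start_disk, nbd_get_disks, nbd_stop_disk, spdk_kill_instance); only the shell variable and the inline comments are added for readability.

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

  $RPC bdev_malloc_create 64 4096          # 64 MiB bdev with 4096-byte blocks -> prints Malloc0
  $RPC bdev_malloc_create 64 4096          # second bdev -> Malloc1
  $RPC nbd_start_disk Malloc0 /dev/nbd0    # export each bdev as a kernel nbd device
  $RPC nbd_start_disk Malloc1 /dev/nbd1
  $RPC nbd_get_disks | jq -r '.[] | .nbd_device'   # lists /dev/nbd0 and /dev/nbd1
  $RPC nbd_stop_disk /dev/nbd0             # detach both devices
  $RPC nbd_stop_disk /dev/nbd1
  $RPC spdk_kill_instance SIGTERM          # end the current app iteration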
00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.163 16:02:22 event.app_repeat -- event/event.sh@39 -- # killprocess 71437 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 71437 ']' 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 71437 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71437 00:07:16.163 killing process with pid 71437 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71437' 00:07:16.163 16:02:22 event.app_repeat -- common/autotest_common.sh@973 -- # kill 71437 00:07:16.164 16:02:22 event.app_repeat -- common/autotest_common.sh@978 -- # wait 71437 00:07:16.164 spdk_app_start is called in Round 0. 00:07:16.164 Shutdown signal received, stop current app iteration 00:07:16.164 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 reinitialization... 00:07:16.164 spdk_app_start is called in Round 1. 00:07:16.164 Shutdown signal received, stop current app iteration 00:07:16.164 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 reinitialization... 00:07:16.164 spdk_app_start is called in Round 2. 00:07:16.164 Shutdown signal received, stop current app iteration 00:07:16.164 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 reinitialization... 00:07:16.164 spdk_app_start is called in Round 3. 00:07:16.164 Shutdown signal received, stop current app iteration 00:07:16.164 16:02:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:16.164 16:02:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:16.164 00:07:16.164 real 0m18.434s 00:07:16.164 user 0m42.456s 00:07:16.164 sys 0m2.493s 00:07:16.164 ************************************ 00:07:16.164 END TEST app_repeat 00:07:16.164 ************************************ 00:07:16.164 16:02:22 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.164 16:02:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.423 16:02:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:16.423 16:02:22 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.423 16:02:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.423 16:02:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.423 16:02:22 event -- common/autotest_common.sh@10 -- # set +x 00:07:16.423 ************************************ 00:07:16.423 START TEST cpu_locks 00:07:16.423 ************************************ 00:07:16.423 16:02:22 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:16.423 * Looking for test storage... 
00:07:16.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:16.423 16:02:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:16.423 16:02:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:16.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.424 --rc genhtml_branch_coverage=1 00:07:16.424 --rc genhtml_function_coverage=1 00:07:16.424 --rc genhtml_legend=1 00:07:16.424 --rc geninfo_all_blocks=1 00:07:16.424 --rc geninfo_unexecuted_blocks=1 00:07:16.424 00:07:16.424 ' 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.424 --rc genhtml_branch_coverage=1 00:07:16.424 --rc genhtml_function_coverage=1 
00:07:16.424 --rc genhtml_legend=1 00:07:16.424 --rc geninfo_all_blocks=1 00:07:16.424 --rc geninfo_unexecuted_blocks=1 00:07:16.424 00:07:16.424 ' 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.424 --rc genhtml_branch_coverage=1 00:07:16.424 --rc genhtml_function_coverage=1 00:07:16.424 --rc genhtml_legend=1 00:07:16.424 --rc geninfo_all_blocks=1 00:07:16.424 --rc geninfo_unexecuted_blocks=1 00:07:16.424 00:07:16.424 ' 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:16.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:16.424 --rc genhtml_branch_coverage=1 00:07:16.424 --rc genhtml_function_coverage=1 00:07:16.424 --rc genhtml_legend=1 00:07:16.424 --rc geninfo_all_blocks=1 00:07:16.424 --rc geninfo_unexecuted_blocks=1 00:07:16.424 00:07:16.424 ' 00:07:16.424 16:02:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:16.424 16:02:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:16.424 16:02:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:16.424 16:02:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.424 16:02:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.424 ************************************ 00:07:16.424 START TEST default_locks 00:07:16.424 ************************************ 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=71875 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 71875 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71875 ']' 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.424 16:02:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.683 [2024-11-19 16:02:23.189937] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:16.683 [2024-11-19 16:02:23.190056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71875 ] 00:07:16.683 [2024-11-19 16:02:23.333818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.683 [2024-11-19 16:02:23.354521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.683 [2024-11-19 16:02:23.392544] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:17.620 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.620 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:17.620 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 71875 00:07:17.620 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 71875 00:07:17.620 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 71875 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 71875 ']' 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 71875 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71875 00:07:17.880 killing process with pid 71875 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71875' 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 71875 00:07:17.880 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 71875 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 71875 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 71875 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 71875 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 71875 ']' 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.139 
16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.139 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (71875) - No such process 00:07:18.139 ERROR: process (pid: 71875) is no longer running 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.139 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.140 16:02:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.140 00:07:18.140 real 0m1.627s 00:07:18.140 user 0m1.868s 00:07:18.140 sys 0m0.405s 00:07:18.140 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.140 ************************************ 00:07:18.140 END TEST default_locks 00:07:18.140 ************************************ 00:07:18.140 16:02:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.140 16:02:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:18.140 16:02:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.140 16:02:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.140 16:02:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.140 ************************************ 00:07:18.140 START TEST default_locks_via_rpc 00:07:18.140 ************************************ 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=71922 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 71922 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 71922 ']' 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
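The default_locks test that just finished checks CPU-core lock ownership with nothing more than lslocks and grep, then confirms that waitforlisten on the killed pid fails with "No such process". A condensed, hand-written version of the positive check is sketched below; the -m 0x1 core mask and the spdk_cpu_lock pattern come from the trace, while the sleep stands in for the test's waitforlisten helper.

  bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  locks_exist() {                                   # true only if the pid holds a core lock file
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  $bin -m 0x1 &                                     # core 0, cpumask locks enabled by default
  pid=$!
  sleep 1                                           # the real test polls the RPC socket instead
  locks_exist "$pid" && echo "core lock held by $pid"
  kill "$pid"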
00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.140 16:02:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.399 [2024-11-19 16:02:24.853822] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:18.399 [2024-11-19 16:02:24.854109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71922 ] 00:07:18.399 [2024-11-19 16:02:25.000496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.399 [2024-11-19 16:02:25.020153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.399 [2024-11-19 16:02:25.059208] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 71922 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 71922 00:07:18.668 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.951 16:02:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 71922 00:07:18.951 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 71922 ']' 00:07:18.951 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 71922 00:07:18.951 16:02:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:18.951 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.951 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71922 00:07:19.213 killing process with pid 71922 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71922' 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 71922 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 71922 00:07:19.213 ************************************ 00:07:19.213 END TEST default_locks_via_rpc 00:07:19.213 ************************************ 00:07:19.213 00:07:19.213 real 0m1.089s 00:07:19.213 user 0m1.187s 00:07:19.213 sys 0m0.412s 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.213 16:02:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 16:02:25 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:19.472 16:02:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.472 16:02:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.472 16:02:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 ************************************ 00:07:19.472 START TEST non_locking_app_on_locked_coremask 00:07:19.472 ************************************ 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=71965 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 71965 /var/tmp/spdk.sock 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71965 ']' 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
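Before the non-locking variant gets going, it is worth spelling out what default_locks_via_rpc (which ended just above) actually exercised: the core lock files can be dropped and re-acquired over the RPC socket while the target keeps running. A minimal stand-alone version is sketched here; framework_disable_cpumask_locks, framework_enable_cpumask_locks, and the lslocks check are the calls seen in the trace, and the script simply takes the running spdk_tgt pid as its one argument.

  #!/usr/bin/env bash
  # usage: ./toggle_locks.sh <spdk_tgt_pid>
  pid=$1
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc framework_disable_cpumask_locks              # release the per-core lock files
  lslocks -p "$pid" | grep -q spdk_cpu_lock || echo "no core locks held"
  $rpc framework_enable_cpumask_locks               # claim them again
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core locks re-acquired"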
00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.472 16:02:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.472 [2024-11-19 16:02:26.012009] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:19.472 [2024-11-19 16:02:26.012104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71965 ] 00:07:19.472 [2024-11-19 16:02:26.154837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.472 [2024-11-19 16:02:26.176324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.731 [2024-11-19 16:02:26.213972] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:19.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=71968 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 71968 /var/tmp/spdk2.sock 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 71968 ']' 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.731 16:02:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:19.731 [2024-11-19 16:02:26.399316] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:19.731 [2024-11-19 16:02:26.399605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71968 ] 00:07:19.991 [2024-11-19 16:02:26.559735] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:19.991 [2024-11-19 16:02:26.559787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.991 [2024-11-19 16:02:26.599486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.991 [2024-11-19 16:02:26.675666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.927 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.927 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.927 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 71965 00:07:20.927 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71965 00:07:20.927 16:02:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 71965 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71965 ']' 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71965 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71965 00:07:21.495 killing process with pid 71965 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71965' 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71965 00:07:21.495 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71965 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 71968 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 71968 ']' 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 71968 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71968 00:07:22.064 killing process with pid 71968 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.064 16:02:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71968' 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 71968 00:07:22.064 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 71968 00:07:22.323 00:07:22.323 real 0m2.919s 00:07:22.323 user 0m3.504s 00:07:22.323 sys 0m0.855s 00:07:22.323 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.323 16:02:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.323 ************************************ 00:07:22.323 END TEST non_locking_app_on_locked_coremask 00:07:22.323 ************************************ 00:07:22.323 16:02:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:22.323 16:02:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.323 16:02:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.323 16:02:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:22.323 ************************************ 00:07:22.323 START TEST locking_app_on_unlocked_coremask 00:07:22.323 ************************************ 00:07:22.323 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:22.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=72030 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 72030 /var/tmp/spdk.sock 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72030 ']' 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.324 16:02:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.324 [2024-11-19 16:02:28.984371] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:22.324 [2024-11-19 16:02:28.984677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72030 ] 00:07:22.583 [2024-11-19 16:02:29.133546] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
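The locks_exist check in this trace confirms that a target still holds its CPU-core lock by listing the file locks owned by its PID. A short sketch of that check, using the same lslocks/grep pair seen above:

# Sketch of the locks_exist-style probe used by cpu_locks.sh.
locks_exist() {
    local pid=$1
    # true if any lock held by $pid refers to an spdk_cpu_lock file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

locks_exist "$pid1" && echo "pid $pid1 still holds a core lock"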
00:07:22.583 [2024-11-19 16:02:29.133783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.583 [2024-11-19 16:02:29.155189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.583 [2024-11-19 16:02:29.191056] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=72038 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 72038 /var/tmp/spdk2.sock 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72038 ']' 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:22.842 16:02:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:22.842 [2024-11-19 16:02:29.377838] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:22.842 [2024-11-19 16:02:29.377951] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72038 ] 00:07:22.842 [2024-11-19 16:02:29.539573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.101 [2024-11-19 16:02:29.578211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.101 [2024-11-19 16:02:29.648968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:23.669 16:02:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.669 16:02:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:23.669 16:02:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 72038 00:07:23.669 16:02:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72038 00:07:23.669 16:02:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 72030 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72030 ']' 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72030 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72030 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.605 killing process with pid 72030 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72030' 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72030 00:07:24.605 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72030 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 72038 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72038 ']' 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 72038 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72038 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.175 killing process with pid 72038 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72038' 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 72038 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 72038 00:07:25.175 00:07:25.175 real 0m2.971s 00:07:25.175 user 0m3.493s 00:07:25.175 sys 0m0.858s 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.175 16:02:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.175 ************************************ 00:07:25.175 END TEST locking_app_on_unlocked_coremask 00:07:25.175 ************************************ 00:07:25.434 16:02:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:25.434 16:02:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.434 16:02:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.434 16:02:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:25.434 ************************************ 00:07:25.434 START TEST locking_app_on_locked_coremask 00:07:25.434 ************************************ 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=72100 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 72100 /var/tmp/spdk.sock 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72100 ']' 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:25.434 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.435 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.435 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.435 16:02:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.435 [2024-11-19 16:02:32.004995] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
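Teardown throughout this log goes through a killprocess helper that probes the PID, refuses to signal a sudo wrapper, then kills and reaps the target. A condensed sketch of that pattern; the real sudo handling in autotest_common.sh is more involved, so treat this as an approximation:

# Sketch of the killprocess-style teardown from the trace.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # is it still running?
    local name
    name=$(ps --no-headers -o comm= "$pid")           # spdk_tgt shows up as reactor_0
    [ "$name" = sudo ] && return 1                    # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap it; ignore the exit status
}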
00:07:25.435 [2024-11-19 16:02:32.005088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72100 ] 00:07:25.694 [2024-11-19 16:02:32.155710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.694 [2024-11-19 16:02:32.179870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.694 [2024-11-19 16:02:32.219381] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=72108 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 72108 /var/tmp/spdk2.sock 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72108 /var/tmp/spdk2.sock 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72108 /var/tmp/spdk2.sock 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 72108 ']' 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.694 16:02:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.954 [2024-11-19 16:02:32.410924] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:25.954 [2024-11-19 16:02:32.411013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72108 ] 00:07:25.954 [2024-11-19 16:02:32.576005] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 72100 has claimed it. 00:07:25.954 [2024-11-19 16:02:32.576068] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:26.522 ERROR: process (pid: 72108) is no longer running 00:07:26.522 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72108) - No such process 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 72100 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 72100 00:07:26.522 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 72100 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 72100 ']' 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 72100 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72100 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.090 killing process with pid 72100 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72100' 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 72100 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 72100 00:07:27.090 00:07:27.090 real 0m1.834s 00:07:27.090 user 0m2.181s 00:07:27.090 sys 0m0.495s 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.090 16:02:33 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:27.090 ************************************ 00:07:27.090 END TEST locking_app_on_locked_coremask 00:07:27.090 ************************************ 00:07:27.350 16:02:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:27.350 16:02:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.350 16:02:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.350 16:02:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.350 ************************************ 00:07:27.350 START TEST locking_overlapped_coremask 00:07:27.350 ************************************ 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=72148 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 72148 /var/tmp/spdk.sock 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72148 ']' 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.350 16:02:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.350 [2024-11-19 16:02:33.900603] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
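locking_app_on_locked_coremask expects the second target to abort, so the harness wraps the wait in a NOT helper that only succeeds when the wrapped command exits non-zero. A simplified sketch of that inverted assertion; the signal-exit handling here is an assumption and coarser than the real helper:

# Sketch of the NOT-style "this command must fail" assertion.
NOT() {
    local es=0
    "$@" || es=$?
    if (( es > 128 )); then      # assumption: fold signal exits back into a plain code
        es=$(( es & 0x7f ))
    fi
    (( es != 0 ))                # success only if the wrapped command failed
}

NOT false && echo "ok: false failed, NOT reports success"
NOT true  || echo "ok: true succeeded, NOT reports failure"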
00:07:27.350 [2024-11-19 16:02:33.900696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72148 ] 00:07:27.350 [2024-11-19 16:02:34.044027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.609 [2024-11-19 16:02:34.067187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.609 [2024-11-19 16:02:34.067885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.609 [2024-11-19 16:02:34.067932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.609 [2024-11-19 16:02:34.105197] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=72166 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 72166 /var/tmp/spdk2.sock 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 72166 /var/tmp/spdk2.sock 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 72166 /var/tmp/spdk2.sock 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 72166 ']' 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.177 16:02:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.437 [2024-11-19 16:02:34.944311] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:28.437 [2024-11-19 16:02:34.944418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72166 ] 00:07:28.437 [2024-11-19 16:02:35.101977] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72148 has claimed it. 00:07:28.437 [2024-11-19 16:02:35.106292] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:29.006 ERROR: process (pid: 72166) is no longer running 00:07:29.006 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (72166) - No such process 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 72148 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 72148 ']' 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 72148 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72148 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.006 killing process with pid 72148 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72148' 00:07:29.006 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 72148 00:07:29.006 16:02:35 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 72148 00:07:29.268 00:07:29.268 real 0m2.092s 00:07:29.268 user 0m6.120s 00:07:29.268 sys 0m0.337s 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.268 ************************************ 00:07:29.268 END TEST locking_overlapped_coremask 00:07:29.268 ************************************ 00:07:29.268 16:02:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:29.268 16:02:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.268 16:02:35 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.268 16:02:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.268 ************************************ 00:07:29.268 START TEST locking_overlapped_coremask_via_rpc 00:07:29.268 ************************************ 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=72212 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 72212 /var/tmp/spdk.sock 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72212 ']' 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.268 16:02:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.527 [2024-11-19 16:02:36.022788] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:29.527 [2024-11-19 16:02:36.022881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72212 ] 00:07:29.527 [2024-11-19 16:02:36.168738] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
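After the overlapping second target is rejected, check_remaining_locks verifies that exactly the lock files for cores 0 through 2 survive. The comparison from the trace, rewritten as a standalone snippet:

# check_remaining_locks, as exercised above: the glob of surviving lock files
# must match the expected set for a 0x7 core mask (cores 000..002).
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})

if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
    echo "only the expected core locks remain"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi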
00:07:29.527 [2024-11-19 16:02:36.168795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.527 [2024-11-19 16:02:36.192427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.527 [2024-11-19 16:02:36.192582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.527 [2024-11-19 16:02:36.192587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.527 [2024-11-19 16:02:36.232252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=72221 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 72221 /var/tmp/spdk2.sock 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72221 ']' 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.786 16:02:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.786 [2024-11-19 16:02:36.429598] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:29.786 [2024-11-19 16:02:36.429766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72221 ] 00:07:30.045 [2024-11-19 16:02:36.594264] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:30.045 [2024-11-19 16:02:36.597541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.045 [2024-11-19 16:02:36.639007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.045 [2024-11-19 16:02:36.642368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:30.045 [2024-11-19 16:02:36.642369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.045 [2024-11-19 16:02:36.714136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.982 [2024-11-19 16:02:37.465439] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 72212 has claimed it. 
00:07:30.982 request: 00:07:30.982 { 00:07:30.982 "method": "framework_enable_cpumask_locks", 00:07:30.982 "req_id": 1 00:07:30.982 } 00:07:30.982 Got JSON-RPC error response 00:07:30.982 response: 00:07:30.982 { 00:07:30.982 "code": -32603, 00:07:30.982 "message": "Failed to claim CPU core: 2" 00:07:30.982 } 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:30.982 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 72212 /var/tmp/spdk.sock 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72212 ']' 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.983 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 72221 /var/tmp/spdk2.sock 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 72221 ']' 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
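The rpc_cmd calls in this test can be reproduced by hand against the two sockets. The sketch below assumes SPDK's stock scripts/rpc.py client at its usual location in the checked-out repo, which is what rpc_cmd wraps in this harness:

# Sketch: enable CPU-mask locks over JSON-RPC on both running targets.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# First target (default /var/tmp/spdk.sock) claims its cores and succeeds.
"$RPC" framework_enable_cpumask_locks

# The second target shares core 2, so its claim should fail with
# code -32603 "Failed to claim CPU core: 2", matching the response above.
if ! "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
    echo "second target could not claim its cores, as expected"
fi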
00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.241 16:02:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.500 ************************************ 00:07:31.500 END TEST locking_overlapped_coremask_via_rpc 00:07:31.500 ************************************ 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:31.500 00:07:31.500 real 0m2.096s 00:07:31.500 user 0m1.292s 00:07:31.500 sys 0m0.156s 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.500 16:02:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.500 16:02:38 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:31.500 16:02:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72212 ]] 00:07:31.500 16:02:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72212 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72212 ']' 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72212 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72212 00:07:31.500 killing process with pid 72212 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72212' 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72212 00:07:31.500 16:02:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72212 00:07:31.760 16:02:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72221 ]] 00:07:31.760 16:02:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72221 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72221 ']' 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72221 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.760 
16:02:38 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72221 00:07:31.760 killing process with pid 72221 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72221' 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 72221 00:07:31.760 16:02:38 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 72221 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 72212 ]] 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 72212 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72212 ']' 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72212 00:07:32.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72212) - No such process 00:07:32.019 Process with pid 72212 is not found 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72212 is not found' 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 72221 ]] 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 72221 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 72221 ']' 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 72221 00:07:32.019 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72221) - No such process 00:07:32.019 Process with pid 72221 is not found 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 72221 is not found' 00:07:32.019 16:02:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:32.019 00:07:32.019 real 0m15.706s 00:07:32.019 user 0m30.198s 00:07:32.019 sys 0m4.232s 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.019 16:02:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.019 ************************************ 00:07:32.019 END TEST cpu_locks 00:07:32.019 ************************************ 00:07:32.019 00:07:32.019 real 0m42.807s 00:07:32.019 user 1m27.698s 00:07:32.019 sys 0m7.455s 00:07:32.019 16:02:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.019 16:02:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:32.019 ************************************ 00:07:32.019 END TEST event 00:07:32.019 ************************************ 00:07:32.019 16:02:38 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:32.019 16:02:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.019 16:02:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.019 16:02:38 -- common/autotest_common.sh@10 -- # set +x 00:07:32.019 ************************************ 00:07:32.019 START TEST thread 00:07:32.019 ************************************ 00:07:32.019 16:02:38 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:32.277 * Looking for test storage... 
00:07:32.277 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:32.277 16:02:38 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:32.277 16:02:38 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:32.277 16:02:38 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:32.277 16:02:38 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:32.277 16:02:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:32.277 16:02:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:32.277 16:02:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:32.277 16:02:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:32.277 16:02:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:32.277 16:02:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:32.277 16:02:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:32.277 16:02:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:32.277 16:02:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:32.277 16:02:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:32.277 16:02:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:32.277 16:02:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:32.277 16:02:38 thread -- scripts/common.sh@345 -- # : 1 00:07:32.277 16:02:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:32.277 16:02:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:32.278 16:02:38 thread -- scripts/common.sh@365 -- # decimal 1 00:07:32.278 16:02:38 thread -- scripts/common.sh@353 -- # local d=1 00:07:32.278 16:02:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:32.278 16:02:38 thread -- scripts/common.sh@355 -- # echo 1 00:07:32.278 16:02:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:32.278 16:02:38 thread -- scripts/common.sh@366 -- # decimal 2 00:07:32.278 16:02:38 thread -- scripts/common.sh@353 -- # local d=2 00:07:32.278 16:02:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:32.278 16:02:38 thread -- scripts/common.sh@355 -- # echo 2 00:07:32.278 16:02:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:32.278 16:02:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:32.278 16:02:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:32.278 16:02:38 thread -- scripts/common.sh@368 -- # return 0 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.278 --rc genhtml_branch_coverage=1 00:07:32.278 --rc genhtml_function_coverage=1 00:07:32.278 --rc genhtml_legend=1 00:07:32.278 --rc geninfo_all_blocks=1 00:07:32.278 --rc geninfo_unexecuted_blocks=1 00:07:32.278 00:07:32.278 ' 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.278 --rc genhtml_branch_coverage=1 00:07:32.278 --rc genhtml_function_coverage=1 00:07:32.278 --rc genhtml_legend=1 00:07:32.278 --rc geninfo_all_blocks=1 00:07:32.278 --rc geninfo_unexecuted_blocks=1 00:07:32.278 00:07:32.278 ' 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:32.278 --rc genhtml_branch_coverage=1 00:07:32.278 --rc genhtml_function_coverage=1 00:07:32.278 --rc genhtml_legend=1 00:07:32.278 --rc geninfo_all_blocks=1 00:07:32.278 --rc geninfo_unexecuted_blocks=1 00:07:32.278 00:07:32.278 ' 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:32.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:32.278 --rc genhtml_branch_coverage=1 00:07:32.278 --rc genhtml_function_coverage=1 00:07:32.278 --rc genhtml_legend=1 00:07:32.278 --rc geninfo_all_blocks=1 00:07:32.278 --rc geninfo_unexecuted_blocks=1 00:07:32.278 00:07:32.278 ' 00:07:32.278 16:02:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.278 16:02:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:32.278 ************************************ 00:07:32.278 START TEST thread_poller_perf 00:07:32.278 ************************************ 00:07:32.278 16:02:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:32.278 [2024-11-19 16:02:38.920344] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:32.278 [2024-11-19 16:02:38.920453] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72347 ] 00:07:32.537 [2024-11-19 16:02:39.068180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.537 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:32.537 [2024-11-19 16:02:39.086770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.473 [2024-11-19T16:02:40.188Z] ====================================== 00:07:33.473 [2024-11-19T16:02:40.188Z] busy:2206596418 (cyc) 00:07:33.473 [2024-11-19T16:02:40.188Z] total_run_count: 372000 00:07:33.473 [2024-11-19T16:02:40.188Z] tsc_hz: 2200000000 (cyc) 00:07:33.473 [2024-11-19T16:02:40.188Z] ====================================== 00:07:33.473 [2024-11-19T16:02:40.188Z] poller_cost: 5931 (cyc), 2695 (nsec) 00:07:33.473 00:07:33.473 real 0m1.222s 00:07:33.473 user 0m1.082s 00:07:33.473 sys 0m0.035s 00:07:33.473 16:02:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.473 ************************************ 00:07:33.473 END TEST thread_poller_perf 00:07:33.473 ************************************ 00:07:33.473 16:02:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:33.473 16:02:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:33.473 16:02:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:33.473 16:02:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.473 16:02:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:33.473 ************************************ 00:07:33.473 START TEST thread_poller_perf 00:07:33.473 ************************************ 00:07:33.473 16:02:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:33.732 [2024-11-19 16:02:40.196369] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:33.732 [2024-11-19 16:02:40.196473] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72383 ] 00:07:33.733 [2024-11-19 16:02:40.347403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.733 Running 1000 pollers for 1 seconds with 0 microseconds period. 
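The poller_perf summary above reports poller_cost as the busy TSC cycles divided by the number of poller invocations, converted to nanoseconds via tsc_hz. The exact internal formula is an assumption, but the reported run-1 numbers can be reproduced from the counters:

# Recompute the run-1 poller_cost line from the counters printed above.
busy_cyc=2206596418     # busy: TSC cycles spent running pollers
runs=372000             # total_run_count: poller invocations in the 1s window
tsc_hz=2200000000       # TSC frequency in cycles per second

awk -v b="$busy_cyc" -v r="$runs" -v hz="$tsc_hz" 'BEGIN {
    cyc  = int(b / r)              # 5931 cycles per invocation
    nsec = int(cyc * 1e9 / hz)     # 2695 ns at 2.2 GHz (integer truncation)
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
}'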
00:07:33.733 [2024-11-19 16:02:40.372127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.113 [2024-11-19T16:02:41.828Z] ====================================== 00:07:35.113 [2024-11-19T16:02:41.828Z] busy:2202602308 (cyc) 00:07:35.113 [2024-11-19T16:02:41.828Z] total_run_count: 4522000 00:07:35.113 [2024-11-19T16:02:41.828Z] tsc_hz: 2200000000 (cyc) 00:07:35.113 [2024-11-19T16:02:41.828Z] ====================================== 00:07:35.113 [2024-11-19T16:02:41.828Z] poller_cost: 487 (cyc), 221 (nsec) 00:07:35.113 00:07:35.113 real 0m1.227s 00:07:35.113 user 0m1.087s 00:07:35.113 sys 0m0.034s 00:07:35.113 16:02:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.113 ************************************ 00:07:35.113 END TEST thread_poller_perf 00:07:35.113 ************************************ 00:07:35.113 16:02:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.113 16:02:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:35.113 ************************************ 00:07:35.113 END TEST thread 00:07:35.113 ************************************ 00:07:35.113 00:07:35.113 real 0m2.742s 00:07:35.113 user 0m2.323s 00:07:35.113 sys 0m0.202s 00:07:35.113 16:02:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.113 16:02:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.113 16:02:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:35.113 16:02:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:35.113 16:02:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.113 16:02:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.113 16:02:41 -- common/autotest_common.sh@10 -- # set +x 00:07:35.113 ************************************ 00:07:35.113 START TEST app_cmdline 00:07:35.113 ************************************ 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:35.114 * Looking for test storage... 
00:07:35.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.114 16:02:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.114 --rc genhtml_branch_coverage=1 00:07:35.114 --rc genhtml_function_coverage=1 00:07:35.114 --rc genhtml_legend=1 00:07:35.114 --rc geninfo_all_blocks=1 00:07:35.114 --rc geninfo_unexecuted_blocks=1 00:07:35.114 00:07:35.114 ' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.114 --rc genhtml_branch_coverage=1 00:07:35.114 --rc genhtml_function_coverage=1 00:07:35.114 --rc genhtml_legend=1 00:07:35.114 --rc geninfo_all_blocks=1 00:07:35.114 --rc geninfo_unexecuted_blocks=1 00:07:35.114 
00:07:35.114 ' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.114 --rc genhtml_branch_coverage=1 00:07:35.114 --rc genhtml_function_coverage=1 00:07:35.114 --rc genhtml_legend=1 00:07:35.114 --rc geninfo_all_blocks=1 00:07:35.114 --rc geninfo_unexecuted_blocks=1 00:07:35.114 00:07:35.114 ' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:35.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.114 --rc genhtml_branch_coverage=1 00:07:35.114 --rc genhtml_function_coverage=1 00:07:35.114 --rc genhtml_legend=1 00:07:35.114 --rc geninfo_all_blocks=1 00:07:35.114 --rc geninfo_unexecuted_blocks=1 00:07:35.114 00:07:35.114 ' 00:07:35.114 16:02:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:35.114 16:02:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=72465 00:07:35.114 16:02:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:35.114 16:02:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 72465 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 72465 ']' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.114 16:02:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.114 [2024-11-19 16:02:41.754652] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
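The trace a few lines up is autotest_common.sh picking coverage flags: it takes the last field of the lcov --version output and, through the lt()/cmp_versions helpers that the trace attributes to scripts/common.sh, checks whether that version is below 2 before settling on the old-style --rc lcov_*_coverage options that end up in LCOV_OPTS. A minimal standalone sketch of the same gate, assuming the SPDK checkout path used throughout this log:

    source /home/vagrant/spdk_repo/spdk/scripts/common.sh
    ver=$(lcov --version | awk '{print $NF}')     # 1.15 in this run
    if lt "$ver" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi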
00:07:35.114 [2024-11-19 16:02:41.754772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72465 ] 00:07:35.373 [2024-11-19 16:02:41.902764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.373 [2024-11-19 16:02:41.921722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.373 [2024-11-19 16:02:41.956369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.373 16:02:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.373 16:02:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:35.373 16:02:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:35.941 { 00:07:35.941 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:07:35.941 "fields": { 00:07:35.941 "major": 25, 00:07:35.941 "minor": 1, 00:07:35.941 "patch": 0, 00:07:35.941 "suffix": "-pre", 00:07:35.941 "commit": "dcc2ca8f3" 00:07:35.941 } 00:07:35.941 } 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:35.941 16:02:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:35.941 16:02:42 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:36.224 request: 00:07:36.224 { 00:07:36.224 "method": "env_dpdk_get_mem_stats", 00:07:36.224 "req_id": 1 00:07:36.224 } 00:07:36.224 Got JSON-RPC error response 00:07:36.224 response: 00:07:36.224 { 00:07:36.224 "code": -32601, 00:07:36.224 "message": "Method not found" 00:07:36.224 } 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.224 16:02:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 72465 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 72465 ']' 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 72465 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72465 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.224 killing process with pid 72465 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72465' 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 72465 00:07:36.224 16:02:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 72465 00:07:36.500 00:07:36.500 real 0m1.451s 00:07:36.500 user 0m1.943s 00:07:36.500 sys 0m0.356s 00:07:36.500 16:02:42 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.500 16:02:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 ************************************ 00:07:36.500 END TEST app_cmdline 00:07:36.500 ************************************ 00:07:36.500 16:02:42 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:36.500 16:02:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.500 16:02:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.500 16:02:42 -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 ************************************ 00:07:36.500 START TEST version 00:07:36.500 ************************************ 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:36.500 * Looking for test storage... 
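The app_cmdline run above exercises the --rpcs-allowed allow-list: spdk_tgt was started with only spdk_get_version and rpc_get_methods permitted, so those two calls succeed while env_dpdk_get_mem_stats comes back with JSON-RPC error -32601, Method not found. The same behaviour can be reproduced by hand with the binaries and paths used in this log:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    sleep 1   # give the target time to open /var/tmp/spdk.sock (the test uses waitforlisten for this)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version         # allowed: prints the version object shown above
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # not on the list: fails with -32601 Method not found
    kill %1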
00:07:36.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.500 16:02:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.500 16:02:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.500 16:02:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.500 16:02:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.500 16:02:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.500 16:02:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.500 16:02:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.500 16:02:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.500 16:02:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.500 16:02:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.500 16:02:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.500 16:02:43 version -- scripts/common.sh@344 -- # case "$op" in 00:07:36.500 16:02:43 version -- scripts/common.sh@345 -- # : 1 00:07:36.500 16:02:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.500 16:02:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.500 16:02:43 version -- scripts/common.sh@365 -- # decimal 1 00:07:36.500 16:02:43 version -- scripts/common.sh@353 -- # local d=1 00:07:36.500 16:02:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.500 16:02:43 version -- scripts/common.sh@355 -- # echo 1 00:07:36.500 16:02:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.500 16:02:43 version -- scripts/common.sh@366 -- # decimal 2 00:07:36.500 16:02:43 version -- scripts/common.sh@353 -- # local d=2 00:07:36.500 16:02:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.500 16:02:43 version -- scripts/common.sh@355 -- # echo 2 00:07:36.500 16:02:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.500 16:02:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.500 16:02:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.500 16:02:43 version -- scripts/common.sh@368 -- # return 0 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.500 --rc genhtml_branch_coverage=1 00:07:36.500 --rc genhtml_function_coverage=1 00:07:36.500 --rc genhtml_legend=1 00:07:36.500 --rc geninfo_all_blocks=1 00:07:36.500 --rc geninfo_unexecuted_blocks=1 00:07:36.500 00:07:36.500 ' 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.500 --rc genhtml_branch_coverage=1 00:07:36.500 --rc genhtml_function_coverage=1 00:07:36.500 --rc genhtml_legend=1 00:07:36.500 --rc geninfo_all_blocks=1 00:07:36.500 --rc geninfo_unexecuted_blocks=1 00:07:36.500 00:07:36.500 ' 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.500 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:36.500 --rc genhtml_branch_coverage=1 00:07:36.500 --rc genhtml_function_coverage=1 00:07:36.500 --rc genhtml_legend=1 00:07:36.500 --rc geninfo_all_blocks=1 00:07:36.500 --rc geninfo_unexecuted_blocks=1 00:07:36.500 00:07:36.500 ' 00:07:36.500 16:02:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.500 --rc genhtml_branch_coverage=1 00:07:36.500 --rc genhtml_function_coverage=1 00:07:36.500 --rc genhtml_legend=1 00:07:36.500 --rc geninfo_all_blocks=1 00:07:36.500 --rc geninfo_unexecuted_blocks=1 00:07:36.500 00:07:36.500 ' 00:07:36.500 16:02:43 version -- app/version.sh@17 -- # get_header_version major 00:07:36.500 16:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # cut -f2 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.500 16:02:43 version -- app/version.sh@17 -- # major=25 00:07:36.500 16:02:43 version -- app/version.sh@18 -- # get_header_version minor 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # cut -f2 00:07:36.500 16:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.500 16:02:43 version -- app/version.sh@18 -- # minor=1 00:07:36.500 16:02:43 version -- app/version.sh@19 -- # get_header_version patch 00:07:36.500 16:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # cut -f2 00:07:36.500 16:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.500 16:02:43 version -- app/version.sh@19 -- # patch=0 00:07:36.759 16:02:43 version -- app/version.sh@20 -- # get_header_version suffix 00:07:36.759 16:02:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:36.759 16:02:43 version -- app/version.sh@14 -- # cut -f2 00:07:36.759 16:02:43 version -- app/version.sh@14 -- # tr -d '"' 00:07:36.759 16:02:43 version -- app/version.sh@20 -- # suffix=-pre 00:07:36.759 16:02:43 version -- app/version.sh@22 -- # version=25.1 00:07:36.759 16:02:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:36.759 16:02:43 version -- app/version.sh@28 -- # version=25.1rc0 00:07:36.759 16:02:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:36.759 16:02:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:36.759 16:02:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:36.759 16:02:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:36.759 00:07:36.759 real 0m0.251s 00:07:36.759 user 0m0.165s 00:07:36.759 sys 0m0.119s 00:07:36.759 16:02:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.759 16:02:43 version -- common/autotest_common.sh@10 -- # set +x 00:07:36.759 ************************************ 00:07:36.759 END TEST version 00:07:36.759 ************************************ 00:07:36.759 16:02:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:36.759 16:02:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:36.759 16:02:43 -- spdk/autotest.sh@194 -- # uname -s 00:07:36.759 16:02:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:36.759 16:02:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:36.759 16:02:43 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:36.759 16:02:43 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:36.759 16:02:43 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:36.760 16:02:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.760 16:02:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.760 16:02:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.760 ************************************ 00:07:36.760 START TEST spdk_dd 00:07:36.760 ************************************ 00:07:36.760 16:02:43 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:36.760 * Looking for test storage... 00:07:36.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:36.760 16:02:43 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.760 16:02:43 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.760 16:02:43 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.019 --rc genhtml_branch_coverage=1 00:07:37.019 --rc genhtml_function_coverage=1 00:07:37.019 --rc genhtml_legend=1 00:07:37.019 --rc geninfo_all_blocks=1 00:07:37.019 --rc geninfo_unexecuted_blocks=1 00:07:37.019 00:07:37.019 ' 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.019 --rc genhtml_branch_coverage=1 00:07:37.019 --rc genhtml_function_coverage=1 00:07:37.019 --rc genhtml_legend=1 00:07:37.019 --rc geninfo_all_blocks=1 00:07:37.019 --rc geninfo_unexecuted_blocks=1 00:07:37.019 00:07:37.019 ' 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.019 --rc genhtml_branch_coverage=1 00:07:37.019 --rc genhtml_function_coverage=1 00:07:37.019 --rc genhtml_legend=1 00:07:37.019 --rc geninfo_all_blocks=1 00:07:37.019 --rc geninfo_unexecuted_blocks=1 00:07:37.019 00:07:37.019 ' 00:07:37.019 16:02:43 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.019 --rc genhtml_branch_coverage=1 00:07:37.019 --rc genhtml_function_coverage=1 00:07:37.019 --rc genhtml_legend=1 00:07:37.019 --rc geninfo_all_blocks=1 00:07:37.019 --rc geninfo_unexecuted_blocks=1 00:07:37.019 00:07:37.019 ' 00:07:37.019 16:02:43 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.019 16:02:43 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.019 16:02:43 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.019 16:02:43 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.019 16:02:43 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.019 16:02:43 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:37.019 16:02:43 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.019 16:02:43 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:37.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:37.279 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:37.279 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:37.279 16:02:43 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:37.279 16:02:43 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:37.279 16:02:43 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:37.279 16:02:43 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:37.279 16:02:43 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:37.280 16:02:43 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:37.280 16:02:43 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:37.280 16:02:43 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:37.280 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- 
dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:37.281 * spdk_dd linked to liburing 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:37.281 16:02:43 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 
00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:37.281 16:02:43 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:37.282 16:02:43 
spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:37.282 16:02:43 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:37.282 16:02:43 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:37.282 16:02:43 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:37.282 16:02:43 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:37.282 16:02:43 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:37.282 16:02:43 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:37.282 16:02:43 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:37.282 16:02:43 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:37.282 16:02:43 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.282 16:02:43 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:37.542 ************************************ 00:07:37.542 START TEST spdk_dd_basic_rw 00:07:37.542 ************************************ 00:07:37.542 16:02:43 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:37.542 * Looking for test storage... 
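Before dd.sh decides which tests to run, dd/common.sh's check_liburing (traced above) runs objdump -p on the spdk_dd binary, walks the NEEDED entries, and sets liburing_in_use=1 as soon as one matches liburing.so.*; the (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) guard therefore only trips when uring testing is requested but the binary is not actually linked against liburing. The same linkage check can be done by hand; the expected output line is an assumption based on the liburing.so.2 match in the trace:

    objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED | grep liburing
    #   NEEDED               liburing.so.2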
00:07:37.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.542 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:37.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.542 --rc genhtml_branch_coverage=1 00:07:37.542 --rc genhtml_function_coverage=1 00:07:37.542 --rc genhtml_legend=1 00:07:37.542 --rc geninfo_all_blocks=1 00:07:37.543 --rc geninfo_unexecuted_blocks=1 00:07:37.543 00:07:37.543 ' 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:37.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.543 --rc genhtml_branch_coverage=1 00:07:37.543 --rc genhtml_function_coverage=1 00:07:37.543 --rc genhtml_legend=1 00:07:37.543 --rc geninfo_all_blocks=1 00:07:37.543 --rc geninfo_unexecuted_blocks=1 00:07:37.543 00:07:37.543 ' 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:37.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.543 --rc genhtml_branch_coverage=1 00:07:37.543 --rc genhtml_function_coverage=1 00:07:37.543 --rc genhtml_legend=1 00:07:37.543 --rc geninfo_all_blocks=1 00:07:37.543 --rc geninfo_unexecuted_blocks=1 00:07:37.543 00:07:37.543 ' 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:37.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.543 --rc genhtml_branch_coverage=1 00:07:37.543 --rc genhtml_function_coverage=1 00:07:37.543 --rc genhtml_legend=1 00:07:37.543 --rc geninfo_all_blocks=1 00:07:37.543 --rc geninfo_unexecuted_blocks=1 00:07:37.543 00:07:37.543 ' 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:37.543 16:02:44 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:37.543 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:37.804 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:37.804 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:37.805 ************************************ 00:07:37.805 START TEST dd_bs_lt_native_bs 00:07:37.805 ************************************ 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:37.805 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:37.805 { 00:07:37.805 "subsystems": [ 00:07:37.805 { 00:07:37.805 "subsystem": "bdev", 00:07:37.805 "config": [ 00:07:37.805 { 00:07:37.805 "params": { 00:07:37.805 "trtype": "pcie", 00:07:37.805 "traddr": "0000:00:10.0", 00:07:37.805 "name": "Nvme0" 00:07:37.805 }, 00:07:37.805 "method": "bdev_nvme_attach_controller" 00:07:37.805 }, 00:07:37.805 { 00:07:37.805 "method": "bdev_wait_for_examine" 00:07:37.805 } 00:07:37.805 ] 00:07:37.805 } 00:07:37.805 ] 00:07:37.805 } 00:07:37.805 [2024-11-19 16:02:44.469905] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:37.805 [2024-11-19 16:02:44.470009] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72804 ] 00:07:38.064 [2024-11-19 16:02:44.622981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.064 [2024-11-19 16:02:44.646911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.064 [2024-11-19 16:02:44.682047] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:38.064 [2024-11-19 16:02:44.775623] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:38.064 [2024-11-19 16:02:44.775701] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.323 [2024-11-19 16:02:44.848430] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.323 00:07:38.323 real 0m0.495s 00:07:38.323 user 0m0.332s 00:07:38.323 sys 0m0.119s 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.323 
************************************ 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:38.323 END TEST dd_bs_lt_native_bs 00:07:38.323 ************************************ 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.323 ************************************ 00:07:38.323 START TEST dd_rw 00:07:38.323 ************************************ 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:38.323 16:02:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.891 16:02:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:38.891 16:02:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.891 16:02:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.891 16:02:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.150 [2024-11-19 16:02:45.613513] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:39.150 [2024-11-19 16:02:45.613611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72835 ] 00:07:39.150 { 00:07:39.150 "subsystems": [ 00:07:39.150 { 00:07:39.150 "subsystem": "bdev", 00:07:39.150 "config": [ 00:07:39.150 { 00:07:39.150 "params": { 00:07:39.150 "trtype": "pcie", 00:07:39.150 "traddr": "0000:00:10.0", 00:07:39.150 "name": "Nvme0" 00:07:39.150 }, 00:07:39.150 "method": "bdev_nvme_attach_controller" 00:07:39.150 }, 00:07:39.150 { 00:07:39.150 "method": "bdev_wait_for_examine" 00:07:39.150 } 00:07:39.150 ] 00:07:39.150 } 00:07:39.150 ] 00:07:39.150 } 00:07:39.150 [2024-11-19 16:02:45.764149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.150 [2024-11-19 16:02:45.787494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.150 [2024-11-19 16:02:45.820375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.409  [2024-11-19T16:02:46.124Z] Copying: 60/60 [kB] (average 29 MBps) 00:07:39.409 00:07:39.409 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:39.409 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:39.409 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.409 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.409 [2024-11-19 16:02:46.085282] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:39.409 [2024-11-19 16:02:46.085384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72848 ] 00:07:39.409 { 00:07:39.409 "subsystems": [ 00:07:39.409 { 00:07:39.409 "subsystem": "bdev", 00:07:39.409 "config": [ 00:07:39.409 { 00:07:39.409 "params": { 00:07:39.409 "trtype": "pcie", 00:07:39.409 "traddr": "0000:00:10.0", 00:07:39.409 "name": "Nvme0" 00:07:39.409 }, 00:07:39.409 "method": "bdev_nvme_attach_controller" 00:07:39.409 }, 00:07:39.409 { 00:07:39.409 "method": "bdev_wait_for_examine" 00:07:39.409 } 00:07:39.409 ] 00:07:39.409 } 00:07:39.409 ] 00:07:39.409 } 00:07:39.668 [2024-11-19 16:02:46.231457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.668 [2024-11-19 16:02:46.248961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.668 [2024-11-19 16:02:46.279176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.668  [2024-11-19T16:02:46.642Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:39.927 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:39.927 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:39.927 { 00:07:39.927 "subsystems": [ 00:07:39.927 { 00:07:39.927 "subsystem": "bdev", 00:07:39.927 "config": [ 00:07:39.927 { 00:07:39.927 "params": { 00:07:39.927 "trtype": "pcie", 00:07:39.927 "traddr": "0000:00:10.0", 00:07:39.927 "name": "Nvme0" 00:07:39.927 }, 00:07:39.927 "method": "bdev_nvme_attach_controller" 00:07:39.927 }, 00:07:39.927 { 00:07:39.927 "method": "bdev_wait_for_examine" 00:07:39.927 } 00:07:39.927 ] 00:07:39.927 } 00:07:39.927 ] 00:07:39.927 } 00:07:39.927 [2024-11-19 16:02:46.546040] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:39.927 [2024-11-19 16:02:46.546134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72864 ] 00:07:40.187 [2024-11-19 16:02:46.692767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.187 [2024-11-19 16:02:46.711642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.187 [2024-11-19 16:02:46.737975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.187  [2024-11-19T16:02:47.161Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:40.446 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.446 16:02:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.014 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:41.014 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.014 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.014 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.014 { 00:07:41.014 "subsystems": [ 00:07:41.014 { 00:07:41.014 "subsystem": "bdev", 00:07:41.014 "config": [ 00:07:41.014 { 00:07:41.014 "params": { 00:07:41.014 "trtype": "pcie", 00:07:41.014 "traddr": "0000:00:10.0", 00:07:41.014 "name": "Nvme0" 00:07:41.014 }, 00:07:41.014 "method": "bdev_nvme_attach_controller" 00:07:41.014 }, 00:07:41.014 { 00:07:41.014 "method": "bdev_wait_for_examine" 00:07:41.014 } 00:07:41.014 ] 00:07:41.014 } 00:07:41.014 ] 00:07:41.014 } 00:07:41.014 [2024-11-19 16:02:47.519235] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:41.014 [2024-11-19 16:02:47.519371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72883 ] 00:07:41.014 [2024-11-19 16:02:47.665849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.014 [2024-11-19 16:02:47.683719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.014 [2024-11-19 16:02:47.710293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.272  [2024-11-19T16:02:47.987Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:41.272 00:07:41.272 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:41.272 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:41.272 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.272 16:02:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.272 { 00:07:41.272 "subsystems": [ 00:07:41.272 { 00:07:41.272 "subsystem": "bdev", 00:07:41.272 "config": [ 00:07:41.272 { 00:07:41.272 "params": { 00:07:41.272 "trtype": "pcie", 00:07:41.272 "traddr": "0000:00:10.0", 00:07:41.272 "name": "Nvme0" 00:07:41.272 }, 00:07:41.272 "method": "bdev_nvme_attach_controller" 00:07:41.272 }, 00:07:41.272 { 00:07:41.272 "method": "bdev_wait_for_examine" 00:07:41.272 } 00:07:41.272 ] 00:07:41.272 } 00:07:41.272 ] 00:07:41.272 } 00:07:41.272 [2024-11-19 16:02:47.974076] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:41.272 [2024-11-19 16:02:47.974175] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72896 ] 00:07:41.531 [2024-11-19 16:02:48.119029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.531 [2024-11-19 16:02:48.137364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.531 [2024-11-19 16:02:48.164088] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:41.790  [2024-11-19T16:02:48.505Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:41.790 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.790 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.790 { 00:07:41.790 "subsystems": [ 00:07:41.790 { 00:07:41.790 "subsystem": "bdev", 00:07:41.790 "config": [ 00:07:41.790 { 00:07:41.790 "params": { 00:07:41.790 "trtype": "pcie", 00:07:41.790 "traddr": "0000:00:10.0", 00:07:41.790 "name": "Nvme0" 00:07:41.790 }, 00:07:41.790 "method": "bdev_nvme_attach_controller" 00:07:41.790 }, 00:07:41.790 { 00:07:41.790 "method": "bdev_wait_for_examine" 00:07:41.790 } 00:07:41.790 ] 00:07:41.790 } 00:07:41.790 ] 00:07:41.790 } 00:07:41.790 [2024-11-19 16:02:48.427067] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:41.790 [2024-11-19 16:02:48.427180] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72912 ] 00:07:42.050 [2024-11-19 16:02:48.576296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.050 [2024-11-19 16:02:48.594806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.050 [2024-11-19 16:02:48.621511] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.050  [2024-11-19T16:02:49.023Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:42.308 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:42.308 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:42.309 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:42.309 16:02:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.877 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:42.877 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:42.877 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.877 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.877 [2024-11-19 16:02:49.405894] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
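Each dd_rw iteration traced above follows the same write/read-back/verify/clear cycle. A standalone sketch of one 4096-byte, queue-depth-1 pass, assuming a 61440-byte dd.dump0 in the current directory (the harness generates that file with gen_bytes and builds the JSON below via gen_conf, feeding it over /dev/fd):

    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},
       "method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}'
    # write the 61440-byte test file to the Nvme0n1 bdev: 15 blocks of 4096 at qd=1
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")
    # read the same 15 blocks back and check the round trip
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")
    diff -q dd.dump0 dd.dump1
    # clear_nvme: overwrite the first 1 MiB with zeros before the next bs/qd combination
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json <(printf '%s' "$conf")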
00:07:42.877 [2024-11-19 16:02:49.406131] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72931 ] 00:07:42.877 { 00:07:42.877 "subsystems": [ 00:07:42.877 { 00:07:42.877 "subsystem": "bdev", 00:07:42.877 "config": [ 00:07:42.877 { 00:07:42.877 "params": { 00:07:42.877 "trtype": "pcie", 00:07:42.877 "traddr": "0000:00:10.0", 00:07:42.877 "name": "Nvme0" 00:07:42.877 }, 00:07:42.877 "method": "bdev_nvme_attach_controller" 00:07:42.877 }, 00:07:42.877 { 00:07:42.877 "method": "bdev_wait_for_examine" 00:07:42.877 } 00:07:42.877 ] 00:07:42.877 } 00:07:42.877 ] 00:07:42.877 } 00:07:42.877 [2024-11-19 16:02:49.550806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.877 [2024-11-19 16:02:49.570513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.151 [2024-11-19 16:02:49.598706] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.151  [2024-11-19T16:02:49.866Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:43.151 00:07:43.151 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:43.151 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.151 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.151 16:02:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.151 [2024-11-19 16:02:49.847701] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:43.151 [2024-11-19 16:02:49.847807] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72939 ] 00:07:43.151 { 00:07:43.151 "subsystems": [ 00:07:43.151 { 00:07:43.151 "subsystem": "bdev", 00:07:43.151 "config": [ 00:07:43.151 { 00:07:43.151 "params": { 00:07:43.151 "trtype": "pcie", 00:07:43.151 "traddr": "0000:00:10.0", 00:07:43.151 "name": "Nvme0" 00:07:43.151 }, 00:07:43.151 "method": "bdev_nvme_attach_controller" 00:07:43.151 }, 00:07:43.151 { 00:07:43.151 "method": "bdev_wait_for_examine" 00:07:43.151 } 00:07:43.151 ] 00:07:43.151 } 00:07:43.151 ] 00:07:43.151 } 00:07:43.411 [2024-11-19 16:02:49.996027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.411 [2024-11-19 16:02:50.016100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.411 [2024-11-19 16:02:50.045565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.669  [2024-11-19T16:02:50.384Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:43.669 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.669 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.669 { 00:07:43.669 "subsystems": [ 00:07:43.669 { 00:07:43.669 "subsystem": "bdev", 00:07:43.669 "config": [ 00:07:43.669 { 00:07:43.669 "params": { 00:07:43.669 "trtype": "pcie", 00:07:43.669 "traddr": "0000:00:10.0", 00:07:43.669 "name": "Nvme0" 00:07:43.669 }, 00:07:43.669 "method": "bdev_nvme_attach_controller" 00:07:43.669 }, 00:07:43.669 { 00:07:43.669 "method": "bdev_wait_for_examine" 00:07:43.669 } 00:07:43.669 ] 00:07:43.669 } 00:07:43.669 ] 00:07:43.669 } 00:07:43.669 [2024-11-19 16:02:50.316709] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:43.669 [2024-11-19 16:02:50.316800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72960 ] 00:07:43.928 [2024-11-19 16:02:50.465408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.928 [2024-11-19 16:02:50.486052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.928 [2024-11-19 16:02:50.515296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.928  [2024-11-19T16:02:50.902Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.187 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.187 16:02:50 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.755 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:44.755 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:44.755 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.755 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.755 { 00:07:44.755 "subsystems": [ 00:07:44.755 { 00:07:44.755 "subsystem": "bdev", 00:07:44.755 "config": [ 00:07:44.755 { 00:07:44.755 "params": { 00:07:44.755 "trtype": "pcie", 00:07:44.755 "traddr": "0000:00:10.0", 00:07:44.755 "name": "Nvme0" 00:07:44.755 }, 00:07:44.755 "method": "bdev_nvme_attach_controller" 00:07:44.755 }, 00:07:44.755 { 00:07:44.755 "method": "bdev_wait_for_examine" 00:07:44.755 } 00:07:44.755 ] 00:07:44.755 } 00:07:44.755 ] 00:07:44.755 } 00:07:44.755 [2024-11-19 16:02:51.314668] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:44.755 [2024-11-19 16:02:51.314920] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72979 ] 00:07:44.755 [2024-11-19 16:02:51.463609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.014 [2024-11-19 16:02:51.482443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.014 [2024-11-19 16:02:51.511575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.014  [2024-11-19T16:02:51.729Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:45.014 00:07:45.014 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:45.014 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:45.014 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.014 16:02:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.274 { 00:07:45.274 "subsystems": [ 00:07:45.274 { 00:07:45.274 "subsystem": "bdev", 00:07:45.274 "config": [ 00:07:45.274 { 00:07:45.274 "params": { 00:07:45.274 "trtype": "pcie", 00:07:45.274 "traddr": "0000:00:10.0", 00:07:45.274 "name": "Nvme0" 00:07:45.274 }, 00:07:45.274 "method": "bdev_nvme_attach_controller" 00:07:45.274 }, 00:07:45.274 { 00:07:45.274 "method": "bdev_wait_for_examine" 00:07:45.274 } 00:07:45.274 ] 00:07:45.274 } 00:07:45.274 ] 00:07:45.274 } 00:07:45.274 [2024-11-19 16:02:51.768514] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:45.274 [2024-11-19 16:02:51.768608] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72987 ] 00:07:45.274 [2024-11-19 16:02:51.917465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.274 [2024-11-19 16:02:51.935441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.274 [2024-11-19 16:02:51.962877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.533  [2024-11-19T16:02:52.248Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:45.534 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.534 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.534 [2024-11-19 16:02:52.221723] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:45.534 [2024-11-19 16:02:52.221965] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73002 ] 00:07:45.534 { 00:07:45.534 "subsystems": [ 00:07:45.534 { 00:07:45.534 "subsystem": "bdev", 00:07:45.534 "config": [ 00:07:45.534 { 00:07:45.534 "params": { 00:07:45.534 "trtype": "pcie", 00:07:45.534 "traddr": "0000:00:10.0", 00:07:45.534 "name": "Nvme0" 00:07:45.534 }, 00:07:45.534 "method": "bdev_nvme_attach_controller" 00:07:45.534 }, 00:07:45.534 { 00:07:45.534 "method": "bdev_wait_for_examine" 00:07:45.534 } 00:07:45.534 ] 00:07:45.534 } 00:07:45.534 ] 00:07:45.534 } 00:07:45.793 [2024-11-19 16:02:52.367459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.793 [2024-11-19 16:02:52.385014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.793 [2024-11-19 16:02:52.411541] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.793  [2024-11-19T16:02:52.768Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:46.053 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:46.053 16:02:52 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:46.621 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:46.621 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.621 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.621 [2024-11-19 16:02:53.151772] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:46.622 [2024-11-19 16:02:53.152045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73021 ] 00:07:46.622 { 00:07:46.622 "subsystems": [ 00:07:46.622 { 00:07:46.622 "subsystem": "bdev", 00:07:46.622 "config": [ 00:07:46.622 { 00:07:46.622 "params": { 00:07:46.622 "trtype": "pcie", 00:07:46.622 "traddr": "0000:00:10.0", 00:07:46.622 "name": "Nvme0" 00:07:46.622 }, 00:07:46.622 "method": "bdev_nvme_attach_controller" 00:07:46.622 }, 00:07:46.622 { 00:07:46.622 "method": "bdev_wait_for_examine" 00:07:46.622 } 00:07:46.622 ] 00:07:46.622 } 00:07:46.622 ] 00:07:46.622 } 00:07:46.622 [2024-11-19 16:02:53.302297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.622 [2024-11-19 16:02:53.322222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.881 [2024-11-19 16:02:53.350976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.881  [2024-11-19T16:02:53.596Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:46.881 00:07:46.881 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.881 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:46.881 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.881 16:02:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.140 { 00:07:47.140 "subsystems": [ 00:07:47.140 { 00:07:47.140 "subsystem": "bdev", 00:07:47.140 "config": [ 00:07:47.140 { 00:07:47.140 "params": { 00:07:47.140 "trtype": "pcie", 00:07:47.140 "traddr": "0000:00:10.0", 00:07:47.140 "name": "Nvme0" 00:07:47.140 }, 00:07:47.140 "method": "bdev_nvme_attach_controller" 00:07:47.140 }, 00:07:47.140 { 00:07:47.140 "method": "bdev_wait_for_examine" 00:07:47.140 } 00:07:47.140 ] 00:07:47.140 } 00:07:47.140 ] 00:07:47.140 } 00:07:47.140 [2024-11-19 16:02:53.610004] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:47.140 [2024-11-19 16:02:53.610116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73035 ] 00:07:47.140 [2024-11-19 16:02:53.759901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.140 [2024-11-19 16:02:53.778653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.140 [2024-11-19 16:02:53.806337] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.399  [2024-11-19T16:02:54.114Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:47.399 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.399 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.399 { 00:07:47.399 "subsystems": [ 00:07:47.399 { 00:07:47.399 "subsystem": "bdev", 00:07:47.399 "config": [ 00:07:47.399 { 00:07:47.399 "params": { 00:07:47.399 "trtype": "pcie", 00:07:47.399 "traddr": "0000:00:10.0", 00:07:47.399 "name": "Nvme0" 00:07:47.399 }, 00:07:47.399 "method": "bdev_nvme_attach_controller" 00:07:47.399 }, 00:07:47.399 { 00:07:47.399 "method": "bdev_wait_for_examine" 00:07:47.399 } 00:07:47.399 ] 00:07:47.399 } 00:07:47.399 ] 00:07:47.399 } 00:07:47.399 [2024-11-19 16:02:54.070280] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:47.399 [2024-11-19 16:02:54.070367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73045 ] 00:07:47.659 [2024-11-19 16:02:54.215571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.659 [2024-11-19 16:02:54.233058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.659 [2024-11-19 16:02:54.261438] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.659  [2024-11-19T16:02:54.633Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.918 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.918 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.487 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:48.487 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:48.487 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.487 16:02:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.487 [2024-11-19 16:02:54.975707] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:48.487 [2024-11-19 16:02:54.975994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73064 ] 00:07:48.487 { 00:07:48.487 "subsystems": [ 00:07:48.487 { 00:07:48.487 "subsystem": "bdev", 00:07:48.487 "config": [ 00:07:48.487 { 00:07:48.487 "params": { 00:07:48.487 "trtype": "pcie", 00:07:48.487 "traddr": "0000:00:10.0", 00:07:48.487 "name": "Nvme0" 00:07:48.487 }, 00:07:48.487 "method": "bdev_nvme_attach_controller" 00:07:48.487 }, 00:07:48.487 { 00:07:48.487 "method": "bdev_wait_for_examine" 00:07:48.487 } 00:07:48.487 ] 00:07:48.487 } 00:07:48.487 ] 00:07:48.487 } 00:07:48.487 [2024-11-19 16:02:55.121666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.487 [2024-11-19 16:02:55.142761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.487 [2024-11-19 16:02:55.171419] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.746  [2024-11-19T16:02:55.461Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:48.746 00:07:48.747 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:48.747 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:48.747 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.747 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.747 { 00:07:48.747 "subsystems": [ 00:07:48.747 { 00:07:48.747 "subsystem": "bdev", 00:07:48.747 "config": [ 00:07:48.747 { 00:07:48.747 "params": { 00:07:48.747 "trtype": "pcie", 00:07:48.747 "traddr": "0000:00:10.0", 00:07:48.747 "name": "Nvme0" 00:07:48.747 }, 00:07:48.747 "method": "bdev_nvme_attach_controller" 00:07:48.747 }, 00:07:48.747 { 00:07:48.747 "method": "bdev_wait_for_examine" 00:07:48.747 } 00:07:48.747 ] 00:07:48.747 } 00:07:48.747 ] 00:07:48.747 } 00:07:48.747 [2024-11-19 16:02:55.426578] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:48.747 [2024-11-19 16:02:55.426692] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73083 ] 00:07:49.006 [2024-11-19 16:02:55.571224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.006 [2024-11-19 16:02:55.588847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.006 [2024-11-19 16:02:55.615944] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.006  [2024-11-19T16:02:55.981Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:49.266 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.266 16:02:55 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.266 { 00:07:49.266 "subsystems": [ 00:07:49.266 { 00:07:49.266 "subsystem": "bdev", 00:07:49.266 "config": [ 00:07:49.266 { 00:07:49.266 "params": { 00:07:49.266 "trtype": "pcie", 00:07:49.266 "traddr": "0000:00:10.0", 00:07:49.266 "name": "Nvme0" 00:07:49.266 }, 00:07:49.266 "method": "bdev_nvme_attach_controller" 00:07:49.266 }, 00:07:49.266 { 00:07:49.266 "method": "bdev_wait_for_examine" 00:07:49.266 } 00:07:49.266 ] 00:07:49.266 } 00:07:49.266 ] 00:07:49.266 } 00:07:49.266 [2024-11-19 16:02:55.878001] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
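Underneath the xtrace noise, every (bs, qd) iteration of dd_rw above is the same four-step cycle. A minimal sketch using the binary, dump files, bdev name and flags that appear in the trace, with $CONF standing in for the bdev configuration the test actually generates on the fly and passes as /dev/fd/62:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
# 1. write: copy the generated pattern file onto the Nvme0n1 bdev
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=16384 --qd=64 --json "$CONF"
# 2. read: copy the same number of blocks back into a second dump file
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=16384 --qd=64 --count=3 --json "$CONF"
# 3. verify: the round-tripped data must be byte-identical
diff -q "$DUMP0" "$DUMP1"
# 4. clear: zero the first 1 MiB of the bdev before the next iteration
"$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"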
00:07:49.266 [2024-11-19 16:02:55.878092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:07:49.526 [2024-11-19 16:02:56.023318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.526 [2024-11-19 16:02:56.040853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.526 [2024-11-19 16:02:56.067224] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:49.526  [2024-11-19T16:02:56.501Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:49.786 00:07:49.786 00:07:49.786 real 0m11.319s 00:07:49.786 user 0m8.370s 00:07:49.786 sys 0m3.502s 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.786 ************************************ 00:07:49.786 END TEST dd_rw 00:07:49.786 ************************************ 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 ************************************ 00:07:49.786 START TEST dd_rw_offset 00:07:49.786 ************************************ 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:49.786 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:49.787 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=ukl5pa1570f03mft6vyya8xgm2d2oejwqk0kxt80izapjwmf3v39e50sn7q1d78evtpj7yd98bzq8llnik9n940np7lg3p8rxjdu0u0o6fu63hmlhcdf70gpdprzfpivpdpx9uxxag9pak52c6jo4ooykaobmctoc2ft5pvpw5tg53wzgamzffxyx4nv201c2u6f0dzjo50i4zy9lc3cyavsog4e309ghbr7pzln38xubxenhzqwuay9fk39pdo5k5kenc74gsuy49th1nojlfdcysocqe85cw5wluc5klrrqqr032scfqbrprleu0v6u6am2jcr90p5n11hbw8ntdopjqlegcnr4gojkgki6hayp569o9n8f1ew5zigabmh72mmypubqdjdyd4umu4oysalzze9k7618rm0omqdryzo08b8v04hf1w95wedgulj7xqryt49uwxecnngbxe9iqum5idn3e1buj9aquipeqc42ti1rb3wouyrn4jucg2tlqbyki5lhwv55875skbd6vpdaeprf7xdont06gfxei782h6noo1wfzvu1w374s2fqax7phkuzc2icz5gvw7gaam7490ox9241gqftnbamq7sucgopjx6qyl31cc5nsmh6quby40nhmsxez75r2vx06jgb21v68ujx4fphjohrno02ruomq94lj613eqxzw44mp4u9z46h7ztbrzh9h8rzj23uatlzqpat10zs3d2045n946xppv4role80olthhhyfi6jpajdzds422gab7nofa2f6zaqtbvqijddehxkbbqhyzmhqs1a70368lcpy6h98bsbhhbk0xnle90phf5m4mw0px99it132b2rx9nxoyeh5k42gyx4bzjj1do3cteu48w726mplef25ponprf1ecpomo2iq7tl1u4mczttrhx4sti5rsaoh3ej8bvr8y5wb11bvlrzwozgq8oig6izwhb60krddmmk1s67eyjtgx0f97vvvz4nlwjhnju5ndp5nqrqh2vubukzukniq1u339wtzxirz8c2t2yagtl772gdixmwyqufmrkpp17k6mzq80qjfru4vnmq32dgoc5wqkj3e5ixxgoaxcoz3ria56lqdxocj60cmmraovznd4oavcgmfe9cvru3npy13ao1cqun8lja9a810nwaotay694ot2t2fuh8yb35hwwmczyooytiw91rx3lc00uzy74a8dyo2hb5mb6f42xhze1giuza8txbzp8uh60q8p1r6cde66eu9gd51ip679r8av3d07t8sjw36vdck2owa84kgk14t3xt03nw7e0td56l7a1m066tkuvvlisosnmmcog9dts24bq1ebo5egvkqf6nwe8fearl9eor8kqznv0njx28qv56opj2372ug0gcaluv7h67z6uh8bjsp862olixvborzt5lr2jcuxqqse8qnwplsjlsrvxgimc726emrt2svc42pqv8pepyl1nhfgc1qkb0e07zu0qlpj2zalp3n18fzk31iu4xsb4odxlvddzosl3v7obzgar1udmgi2clkcpr9pamrintt34cv1m2i49luk855ttazd9g65jnvwk2zxcv8op5wvhi08c3htbiqko8zjyuuvdygoycrm1spcthq60ka01qapy625z7dl4y7snfobgt5oop3wqyrnsj3hr61grmegyins615xpuu2is6q200w81kr6uk6mn2rbaluq164gdwxr15fgsdlsnlwsoqskxt8u0paoqj8eoaind88jgni6uukqi22uabln3q1n1em20fa74pnarbgk4bnu95b0opzb5kj9bwvt11btz2473d3v786x4046a862xtj4bjy0ff5fv02wkm8pcoeq73gqtjax213kks2ocoy9jtmet3cfuqtzasq0f2qwv458ruudkixua1nlvcujjzg0zbmoe1ssi75qqpc0hxz2c1shggo7usw9bwtd3gdgwatbbin4d99ai4x74bvfk9yjh2e0gomdox76kjgv7su2vn6y2zyj9l11czscn2pco2uhs15d4mqcocylili4im319w18lpw8apyg1577g5cjb871f5vix8kmjil7dkt70u8foxb97uy2qugusxzzxt8340eq4pekzxol9a2gkxk66mgo3lp82do6lp6s56w3x7p5sdppy2xp0i00foyyf3yp8aniwun69ymijftj7enf74hyv8n5ac4bra1a7dxrefnfin3qlsellvtns6cpkc5ycdsmgmdnjacal4528fi97mzdqhejkpuw02uen371kkmst1xh5fsrbenuh1j6geigkbrtkm9wq0yxwmyj065kepxcv8sitc70uk897b0l21zdgkj1iz1by6ksyyd8av7g6cqr7gcjwc8mt3nyjtrs26oscyzc87l7yaflo3h9hljwxf1wmeaz9g0tctkdwbhm4t60oaw5s0byvkzohszzf7s25eqxt6mmjegro314u894e5i2jgrjbrp13yxkoitdcu5aj0v0mssy2rb4dl9qfvtehx3x4c0l5hd9vdrqm89u4jgwlvg5utkz575woryqtt47ofswebwb0w7t8yxjct4hf9xdfot0lsrrix2xxey5ja0e9mptktgz66wxmh7wpng58va6ep2sohx2c06wvmeo61itkzfof07k65iy7hc16n0p6tgguajhbtrpnl1m6mtmrygm1c2d7p6icnxolw6trw2eeona4xha0shh8qrqy46fsb68leajj6g3ze4g82g96zwsnbd7bhml6jl1r71lmm2ke21caasqpn3wb9d88zrwu5vydqurfpkbrhb01x7t144uhnslm9nrpaqgtuiz27cjfn61lkq59ehe38ewm35cjl8y6mhntqikljp45n4buez9udipy1rx9u86wd2qw1x4nfb4k2hk85ryhy4x4717i5zjzmn8vkq2rmmfozgf3ni5h5pc0ofl76avuuaok02c41lodginon9u87z4oj1cqy68wzgp50xdui2kwcilmh0ue1axldfljjny1aflhy0fy11uo8mduojzzo18chizzks2gaprybvg3uaswqkpnl065gwn5gz9ygfu9m5eyer1slpc5ywwminxfd2tdqy4fscpgw0wmb6amw2wnn3hnhzbbbht0xwl2f470mxu1ifkot4uu9fitxqtcbrjpxfa4h2d9hpz1lzoq2hnedqv5qhxfvuq4lsk6tloiz8qhtpx1hivjyxyh5ikrl0i9iugv3ndguu8m3juy64dzaobhkqmbtl8h5hbxwxuy6c1twskz5iaqrsebv9ngk0h6bmt5ke4msolzhss4drr2c8fm75pjuu5qpym623bul4duez0l2ir6et2sv6ng8rxkf5qmjvc7f3lzxdk1zs89quefgu4zjpolkd13bcekqpt0pynkievrr0xlbju630qjrrtvbu7ha1q9w2hid21ztd5f2qi867p4yi6d72uxp5lywway0s8jgpktjtjuuoayemwgl
zsirsh1s77qtg1644m75mo54iskzyzqy2wr6ff07qt60r89jxb8fomcpsip5e86h6ema4w57ctm00zdp3awppmii1ottb0usouqeftskh9jl8snpk8wsukf17ywbs4wnbg2gl72o237jnfs8ltbtbzwju7ols9hhr1uu58x4w1ylxlam88p540evxabhzma61w4a459mzoc1jshmpdnep8qrj9vlxkqjsfvsgb2eyfree0xhzxqd2sy2d5t1n6097hyerpiq5taxkn5qhj325ursbgn2dhsslraamld9xlbeh4ji9ldvo0kn2351bpo92rglfw3g337h2ydgdjxhb4ll5j212yn2qjnpwmb9lss1zp46xwomtx7tsuj374xohrrznattqt6vxw8pa02w3f0x6uslzk74n3dheeiutuhe7x5amgewzqe9zt8med3rl2d5khlkn2puyj74fixe0m8mrfsw84dgky1w77f3yfmyte3aqzsf4vvg34l1ega4r5ck21r0amaq14b9xio19jlisv25bf9x7g 00:07:49.787 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:49.787 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:49.787 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:49.787 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:49.787 [2024-11-19 16:02:56.436405] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:49.787 [2024-11-19 16:02:56.436493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73129 ] 00:07:49.787 { 00:07:49.787 "subsystems": [ 00:07:49.787 { 00:07:49.787 "subsystem": "bdev", 00:07:49.787 "config": [ 00:07:49.787 { 00:07:49.787 "params": { 00:07:49.787 "trtype": "pcie", 00:07:49.787 "traddr": "0000:00:10.0", 00:07:49.787 "name": "Nvme0" 00:07:49.787 }, 00:07:49.787 "method": "bdev_nvme_attach_controller" 00:07:49.787 }, 00:07:49.787 { 00:07:49.787 "method": "bdev_wait_for_examine" 00:07:49.787 } 00:07:49.787 ] 00:07:49.787 } 00:07:49.787 ] 00:07:49.787 } 00:07:50.046 [2024-11-19 16:02:56.583913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.047 [2024-11-19 16:02:56.605748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.047 [2024-11-19 16:02:56.632999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.047  [2024-11-19T16:02:57.020Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:50.305 00:07:50.305 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:50.305 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:50.305 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:50.305 16:02:56 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:50.305 [2024-11-19 16:02:56.886039] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:50.305 [2024-11-19 16:02:56.886302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73137 ] 00:07:50.305 { 00:07:50.305 "subsystems": [ 00:07:50.305 { 00:07:50.305 "subsystem": "bdev", 00:07:50.305 "config": [ 00:07:50.305 { 00:07:50.305 "params": { 00:07:50.305 "trtype": "pcie", 00:07:50.305 "traddr": "0000:00:10.0", 00:07:50.305 "name": "Nvme0" 00:07:50.305 }, 00:07:50.305 "method": "bdev_nvme_attach_controller" 00:07:50.305 }, 00:07:50.305 { 00:07:50.305 "method": "bdev_wait_for_examine" 00:07:50.305 } 00:07:50.305 ] 00:07:50.305 } 00:07:50.305 ] 00:07:50.305 } 00:07:50.564 [2024-11-19 16:02:57.032273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.564 [2024-11-19 16:02:57.050390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.564 [2024-11-19 16:02:57.076889] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.564  [2024-11-19T16:02:57.279Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:50.564 00:07:50.824 16:02:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:50.824 ************************************ 00:07:50.824 END TEST dd_rw_offset 00:07:50.824 ************************************ 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ ukl5pa1570f03mft6vyya8xgm2d2oejwqk0kxt80izapjwmf3v39e50sn7q1d78evtpj7yd98bzq8llnik9n940np7lg3p8rxjdu0u0o6fu63hmlhcdf70gpdprzfpivpdpx9uxxag9pak52c6jo4ooykaobmctoc2ft5pvpw5tg53wzgamzffxyx4nv201c2u6f0dzjo50i4zy9lc3cyavsog4e309ghbr7pzln38xubxenhzqwuay9fk39pdo5k5kenc74gsuy49th1nojlfdcysocqe85cw5wluc5klrrqqr032scfqbrprleu0v6u6am2jcr90p5n11hbw8ntdopjqlegcnr4gojkgki6hayp569o9n8f1ew5zigabmh72mmypubqdjdyd4umu4oysalzze9k7618rm0omqdryzo08b8v04hf1w95wedgulj7xqryt49uwxecnngbxe9iqum5idn3e1buj9aquipeqc42ti1rb3wouyrn4jucg2tlqbyki5lhwv55875skbd6vpdaeprf7xdont06gfxei782h6noo1wfzvu1w374s2fqax7phkuzc2icz5gvw7gaam7490ox9241gqftnbamq7sucgopjx6qyl31cc5nsmh6quby40nhmsxez75r2vx06jgb21v68ujx4fphjohrno02ruomq94lj613eqxzw44mp4u9z46h7ztbrzh9h8rzj23uatlzqpat10zs3d2045n946xppv4role80olthhhyfi6jpajdzds422gab7nofa2f6zaqtbvqijddehxkbbqhyzmhqs1a70368lcpy6h98bsbhhbk0xnle90phf5m4mw0px99it132b2rx9nxoyeh5k42gyx4bzjj1do3cteu48w726mplef25ponprf1ecpomo2iq7tl1u4mczttrhx4sti5rsaoh3ej8bvr8y5wb11bvlrzwozgq8oig6izwhb60krddmmk1s67eyjtgx0f97vvvz4nlwjhnju5ndp5nqrqh2vubukzukniq1u339wtzxirz8c2t2yagtl772gdixmwyqufmrkpp17k6mzq80qjfru4vnmq32dgoc5wqkj3e5ixxgoaxcoz3ria56lqdxocj60cmmraovznd4oavcgmfe9cvru3npy13ao1cqun8lja9a810nwaotay694ot2t2fuh8yb35hwwmczyooytiw91rx3lc00uzy74a8dyo2hb5mb6f42xhze1giuza8txbzp8uh60q8p1r6cde66eu9gd51ip679r8av3d07t8sjw36vdck2owa84kgk14t3xt03nw7e0td56l7a1m066tkuvvlisosnmmcog9dts24bq1ebo5egvkqf6nwe8fearl9eor8kqznv0njx28qv56opj2372ug0gcaluv7h67z6uh8bjsp862olixvborzt5lr2jcuxqqse8qnwplsjlsrvxgimc726emrt2svc42pqv8pepyl1nhfgc1qkb0e07zu0qlpj2zalp3n18fzk31iu4xsb4odxlvddzosl3v7obzgar1udmgi2clkcpr9pamrintt34cv1m2i49luk855ttazd9g65jnvwk2zxcv8op5wvhi08c3htbiqko8zjyuuvdygoycrm1spcthq60ka01qapy625z7dl4y7snfobgt5oop3wqyrnsj3hr61grmegyins615xpuu2is6q200w81kr6uk6mn2rbaluq164gdwxr15fgsdlsnlwsoqskxt8u0paoqj8eoaind88jgni6uukqi22uabln3q1n1em20fa74pnarbgk4bnu95b0opzb5kj9bwvt11btz2473d3v786x4046a862xtj4bjy0ff5fv02wkm8pcoeq73gqtjax213kks2ocoy9jtmet3cfuqtzasq0f2qwv458ruudkixua1nlvcujjzg0zbmoe1ssi75qqpc0hxz2c1shggo7us
w9bwtd3gdgwatbbin4d99ai4x74bvfk9yjh2e0gomdox76kjgv7su2vn6y2zyj9l11czscn2pco2uhs15d4mqcocylili4im319w18lpw8apyg1577g5cjb871f5vix8kmjil7dkt70u8foxb97uy2qugusxzzxt8340eq4pekzxol9a2gkxk66mgo3lp82do6lp6s56w3x7p5sdppy2xp0i00foyyf3yp8aniwun69ymijftj7enf74hyv8n5ac4bra1a7dxrefnfin3qlsellvtns6cpkc5ycdsmgmdnjacal4528fi97mzdqhejkpuw02uen371kkmst1xh5fsrbenuh1j6geigkbrtkm9wq0yxwmyj065kepxcv8sitc70uk897b0l21zdgkj1iz1by6ksyyd8av7g6cqr7gcjwc8mt3nyjtrs26oscyzc87l7yaflo3h9hljwxf1wmeaz9g0tctkdwbhm4t60oaw5s0byvkzohszzf7s25eqxt6mmjegro314u894e5i2jgrjbrp13yxkoitdcu5aj0v0mssy2rb4dl9qfvtehx3x4c0l5hd9vdrqm89u4jgwlvg5utkz575woryqtt47ofswebwb0w7t8yxjct4hf9xdfot0lsrrix2xxey5ja0e9mptktgz66wxmh7wpng58va6ep2sohx2c06wvmeo61itkzfof07k65iy7hc16n0p6tgguajhbtrpnl1m6mtmrygm1c2d7p6icnxolw6trw2eeona4xha0shh8qrqy46fsb68leajj6g3ze4g82g96zwsnbd7bhml6jl1r71lmm2ke21caasqpn3wb9d88zrwu5vydqurfpkbrhb01x7t144uhnslm9nrpaqgtuiz27cjfn61lkq59ehe38ewm35cjl8y6mhntqikljp45n4buez9udipy1rx9u86wd2qw1x4nfb4k2hk85ryhy4x4717i5zjzmn8vkq2rmmfozgf3ni5h5pc0ofl76avuuaok02c41lodginon9u87z4oj1cqy68wzgp50xdui2kwcilmh0ue1axldfljjny1aflhy0fy11uo8mduojzzo18chizzks2gaprybvg3uaswqkpnl065gwn5gz9ygfu9m5eyer1slpc5ywwminxfd2tdqy4fscpgw0wmb6amw2wnn3hnhzbbbht0xwl2f470mxu1ifkot4uu9fitxqtcbrjpxfa4h2d9hpz1lzoq2hnedqv5qhxfvuq4lsk6tloiz8qhtpx1hivjyxyh5ikrl0i9iugv3ndguu8m3juy64dzaobhkqmbtl8h5hbxwxuy6c1twskz5iaqrsebv9ngk0h6bmt5ke4msolzhss4drr2c8fm75pjuu5qpym623bul4duez0l2ir6et2sv6ng8rxkf5qmjvc7f3lzxdk1zs89quefgu4zjpolkd13bcekqpt0pynkievrr0xlbju630qjrrtvbu7ha1q9w2hid21ztd5f2qi867p4yi6d72uxp5lywway0s8jgpktjtjuuoayemwglzsirsh1s77qtg1644m75mo54iskzyzqy2wr6ff07qt60r89jxb8fomcpsip5e86h6ema4w57ctm00zdp3awppmii1ottb0usouqeftskh9jl8snpk8wsukf17ywbs4wnbg2gl72o237jnfs8ltbtbzwju7ols9hhr1uu58x4w1ylxlam88p540evxabhzma61w4a459mzoc1jshmpdnep8qrj9vlxkqjsfvsgb2eyfree0xhzxqd2sy2d5t1n6097hyerpiq5taxkn5qhj325ursbgn2dhsslraamld9xlbeh4ji9ldvo0kn2351bpo92rglfw3g337h2ydgdjxhb4ll5j212yn2qjnpwmb9lss1zp46xwomtx7tsuj374xohrrznattqt6vxw8pa02w3f0x6uslzk74n3dheeiutuhe7x5amgewzqe9zt8med3rl2d5khlkn2puyj74fixe0m8mrfsw84dgky1w77f3yfmyte3aqzsf4vvg34l1ega4r5ck21r0amaq14b9xio19jlisv25bf9x7g == 
\u\k\l\5\p\a\1\5\7\0\f\0\3\m\f\t\6\v\y\y\a\8\x\g\m\2\d\2\o\e\j\w\q\k\0\k\x\t\8\0\i\z\a\p\j\w\m\f\3\v\3\9\e\5\0\s\n\7\q\1\d\7\8\e\v\t\p\j\7\y\d\9\8\b\z\q\8\l\l\n\i\k\9\n\9\4\0\n\p\7\l\g\3\p\8\r\x\j\d\u\0\u\0\o\6\f\u\6\3\h\m\l\h\c\d\f\7\0\g\p\d\p\r\z\f\p\i\v\p\d\p\x\9\u\x\x\a\g\9\p\a\k\5\2\c\6\j\o\4\o\o\y\k\a\o\b\m\c\t\o\c\2\f\t\5\p\v\p\w\5\t\g\5\3\w\z\g\a\m\z\f\f\x\y\x\4\n\v\2\0\1\c\2\u\6\f\0\d\z\j\o\5\0\i\4\z\y\9\l\c\3\c\y\a\v\s\o\g\4\e\3\0\9\g\h\b\r\7\p\z\l\n\3\8\x\u\b\x\e\n\h\z\q\w\u\a\y\9\f\k\3\9\p\d\o\5\k\5\k\e\n\c\7\4\g\s\u\y\4\9\t\h\1\n\o\j\l\f\d\c\y\s\o\c\q\e\8\5\c\w\5\w\l\u\c\5\k\l\r\r\q\q\r\0\3\2\s\c\f\q\b\r\p\r\l\e\u\0\v\6\u\6\a\m\2\j\c\r\9\0\p\5\n\1\1\h\b\w\8\n\t\d\o\p\j\q\l\e\g\c\n\r\4\g\o\j\k\g\k\i\6\h\a\y\p\5\6\9\o\9\n\8\f\1\e\w\5\z\i\g\a\b\m\h\7\2\m\m\y\p\u\b\q\d\j\d\y\d\4\u\m\u\4\o\y\s\a\l\z\z\e\9\k\7\6\1\8\r\m\0\o\m\q\d\r\y\z\o\0\8\b\8\v\0\4\h\f\1\w\9\5\w\e\d\g\u\l\j\7\x\q\r\y\t\4\9\u\w\x\e\c\n\n\g\b\x\e\9\i\q\u\m\5\i\d\n\3\e\1\b\u\j\9\a\q\u\i\p\e\q\c\4\2\t\i\1\r\b\3\w\o\u\y\r\n\4\j\u\c\g\2\t\l\q\b\y\k\i\5\l\h\w\v\5\5\8\7\5\s\k\b\d\6\v\p\d\a\e\p\r\f\7\x\d\o\n\t\0\6\g\f\x\e\i\7\8\2\h\6\n\o\o\1\w\f\z\v\u\1\w\3\7\4\s\2\f\q\a\x\7\p\h\k\u\z\c\2\i\c\z\5\g\v\w\7\g\a\a\m\7\4\9\0\o\x\9\2\4\1\g\q\f\t\n\b\a\m\q\7\s\u\c\g\o\p\j\x\6\q\y\l\3\1\c\c\5\n\s\m\h\6\q\u\b\y\4\0\n\h\m\s\x\e\z\7\5\r\2\v\x\0\6\j\g\b\2\1\v\6\8\u\j\x\4\f\p\h\j\o\h\r\n\o\0\2\r\u\o\m\q\9\4\l\j\6\1\3\e\q\x\z\w\4\4\m\p\4\u\9\z\4\6\h\7\z\t\b\r\z\h\9\h\8\r\z\j\2\3\u\a\t\l\z\q\p\a\t\1\0\z\s\3\d\2\0\4\5\n\9\4\6\x\p\p\v\4\r\o\l\e\8\0\o\l\t\h\h\h\y\f\i\6\j\p\a\j\d\z\d\s\4\2\2\g\a\b\7\n\o\f\a\2\f\6\z\a\q\t\b\v\q\i\j\d\d\e\h\x\k\b\b\q\h\y\z\m\h\q\s\1\a\7\0\3\6\8\l\c\p\y\6\h\9\8\b\s\b\h\h\b\k\0\x\n\l\e\9\0\p\h\f\5\m\4\m\w\0\p\x\9\9\i\t\1\3\2\b\2\r\x\9\n\x\o\y\e\h\5\k\4\2\g\y\x\4\b\z\j\j\1\d\o\3\c\t\e\u\4\8\w\7\2\6\m\p\l\e\f\2\5\p\o\n\p\r\f\1\e\c\p\o\m\o\2\i\q\7\t\l\1\u\4\m\c\z\t\t\r\h\x\4\s\t\i\5\r\s\a\o\h\3\e\j\8\b\v\r\8\y\5\w\b\1\1\b\v\l\r\z\w\o\z\g\q\8\o\i\g\6\i\z\w\h\b\6\0\k\r\d\d\m\m\k\1\s\6\7\e\y\j\t\g\x\0\f\9\7\v\v\v\z\4\n\l\w\j\h\n\j\u\5\n\d\p\5\n\q\r\q\h\2\v\u\b\u\k\z\u\k\n\i\q\1\u\3\3\9\w\t\z\x\i\r\z\8\c\2\t\2\y\a\g\t\l\7\7\2\g\d\i\x\m\w\y\q\u\f\m\r\k\p\p\1\7\k\6\m\z\q\8\0\q\j\f\r\u\4\v\n\m\q\3\2\d\g\o\c\5\w\q\k\j\3\e\5\i\x\x\g\o\a\x\c\o\z\3\r\i\a\5\6\l\q\d\x\o\c\j\6\0\c\m\m\r\a\o\v\z\n\d\4\o\a\v\c\g\m\f\e\9\c\v\r\u\3\n\p\y\1\3\a\o\1\c\q\u\n\8\l\j\a\9\a\8\1\0\n\w\a\o\t\a\y\6\9\4\o\t\2\t\2\f\u\h\8\y\b\3\5\h\w\w\m\c\z\y\o\o\y\t\i\w\9\1\r\x\3\l\c\0\0\u\z\y\7\4\a\8\d\y\o\2\h\b\5\m\b\6\f\4\2\x\h\z\e\1\g\i\u\z\a\8\t\x\b\z\p\8\u\h\6\0\q\8\p\1\r\6\c\d\e\6\6\e\u\9\g\d\5\1\i\p\6\7\9\r\8\a\v\3\d\0\7\t\8\s\j\w\3\6\v\d\c\k\2\o\w\a\8\4\k\g\k\1\4\t\3\x\t\0\3\n\w\7\e\0\t\d\5\6\l\7\a\1\m\0\6\6\t\k\u\v\v\l\i\s\o\s\n\m\m\c\o\g\9\d\t\s\2\4\b\q\1\e\b\o\5\e\g\v\k\q\f\6\n\w\e\8\f\e\a\r\l\9\e\o\r\8\k\q\z\n\v\0\n\j\x\2\8\q\v\5\6\o\p\j\2\3\7\2\u\g\0\g\c\a\l\u\v\7\h\6\7\z\6\u\h\8\b\j\s\p\8\6\2\o\l\i\x\v\b\o\r\z\t\5\l\r\2\j\c\u\x\q\q\s\e\8\q\n\w\p\l\s\j\l\s\r\v\x\g\i\m\c\7\2\6\e\m\r\t\2\s\v\c\4\2\p\q\v\8\p\e\p\y\l\1\n\h\f\g\c\1\q\k\b\0\e\0\7\z\u\0\q\l\p\j\2\z\a\l\p\3\n\1\8\f\z\k\3\1\i\u\4\x\s\b\4\o\d\x\l\v\d\d\z\o\s\l\3\v\7\o\b\z\g\a\r\1\u\d\m\g\i\2\c\l\k\c\p\r\9\p\a\m\r\i\n\t\t\3\4\c\v\1\m\2\i\4\9\l\u\k\8\5\5\t\t\a\z\d\9\g\6\5\j\n\v\w\k\2\z\x\c\v\8\o\p\5\w\v\h\i\0\8\c\3\h\t\b\i\q\k\o\8\z\j\y\u\u\v\d\y\g\o\y\c\r\m\1\s\p\c\t\h\q\6\0\k\a\0\1\q\a\p\y\6\2\5\z\7\d\l\4\y\7\s\n\f\o\b\g\t\5\o\o\p\3\w\q\y\r\n\s\j\3\h\r\6\1\g\r\m\e\g\y\i\n\s\6\1\5\x\p\u\u\2\i\s\6\q\2\0\0\w\8\1\k\r\6\u\k\6\m\n\2\r\b\a\l\u\q\1\6\4\g\d\w\x\r\1\5\f\g\s\d\l\s\n\l\w\s\o\q\s\k\x\
t\8\u\0\p\a\o\q\j\8\e\o\a\i\n\d\8\8\j\g\n\i\6\u\u\k\q\i\2\2\u\a\b\l\n\3\q\1\n\1\e\m\2\0\f\a\7\4\p\n\a\r\b\g\k\4\b\n\u\9\5\b\0\o\p\z\b\5\k\j\9\b\w\v\t\1\1\b\t\z\2\4\7\3\d\3\v\7\8\6\x\4\0\4\6\a\8\6\2\x\t\j\4\b\j\y\0\f\f\5\f\v\0\2\w\k\m\8\p\c\o\e\q\7\3\g\q\t\j\a\x\2\1\3\k\k\s\2\o\c\o\y\9\j\t\m\e\t\3\c\f\u\q\t\z\a\s\q\0\f\2\q\w\v\4\5\8\r\u\u\d\k\i\x\u\a\1\n\l\v\c\u\j\j\z\g\0\z\b\m\o\e\1\s\s\i\7\5\q\q\p\c\0\h\x\z\2\c\1\s\h\g\g\o\7\u\s\w\9\b\w\t\d\3\g\d\g\w\a\t\b\b\i\n\4\d\9\9\a\i\4\x\7\4\b\v\f\k\9\y\j\h\2\e\0\g\o\m\d\o\x\7\6\k\j\g\v\7\s\u\2\v\n\6\y\2\z\y\j\9\l\1\1\c\z\s\c\n\2\p\c\o\2\u\h\s\1\5\d\4\m\q\c\o\c\y\l\i\l\i\4\i\m\3\1\9\w\1\8\l\p\w\8\a\p\y\g\1\5\7\7\g\5\c\j\b\8\7\1\f\5\v\i\x\8\k\m\j\i\l\7\d\k\t\7\0\u\8\f\o\x\b\9\7\u\y\2\q\u\g\u\s\x\z\z\x\t\8\3\4\0\e\q\4\p\e\k\z\x\o\l\9\a\2\g\k\x\k\6\6\m\g\o\3\l\p\8\2\d\o\6\l\p\6\s\5\6\w\3\x\7\p\5\s\d\p\p\y\2\x\p\0\i\0\0\f\o\y\y\f\3\y\p\8\a\n\i\w\u\n\6\9\y\m\i\j\f\t\j\7\e\n\f\7\4\h\y\v\8\n\5\a\c\4\b\r\a\1\a\7\d\x\r\e\f\n\f\i\n\3\q\l\s\e\l\l\v\t\n\s\6\c\p\k\c\5\y\c\d\s\m\g\m\d\n\j\a\c\a\l\4\5\2\8\f\i\9\7\m\z\d\q\h\e\j\k\p\u\w\0\2\u\e\n\3\7\1\k\k\m\s\t\1\x\h\5\f\s\r\b\e\n\u\h\1\j\6\g\e\i\g\k\b\r\t\k\m\9\w\q\0\y\x\w\m\y\j\0\6\5\k\e\p\x\c\v\8\s\i\t\c\7\0\u\k\8\9\7\b\0\l\2\1\z\d\g\k\j\1\i\z\1\b\y\6\k\s\y\y\d\8\a\v\7\g\6\c\q\r\7\g\c\j\w\c\8\m\t\3\n\y\j\t\r\s\2\6\o\s\c\y\z\c\8\7\l\7\y\a\f\l\o\3\h\9\h\l\j\w\x\f\1\w\m\e\a\z\9\g\0\t\c\t\k\d\w\b\h\m\4\t\6\0\o\a\w\5\s\0\b\y\v\k\z\o\h\s\z\z\f\7\s\2\5\e\q\x\t\6\m\m\j\e\g\r\o\3\1\4\u\8\9\4\e\5\i\2\j\g\r\j\b\r\p\1\3\y\x\k\o\i\t\d\c\u\5\a\j\0\v\0\m\s\s\y\2\r\b\4\d\l\9\q\f\v\t\e\h\x\3\x\4\c\0\l\5\h\d\9\v\d\r\q\m\8\9\u\4\j\g\w\l\v\g\5\u\t\k\z\5\7\5\w\o\r\y\q\t\t\4\7\o\f\s\w\e\b\w\b\0\w\7\t\8\y\x\j\c\t\4\h\f\9\x\d\f\o\t\0\l\s\r\r\i\x\2\x\x\e\y\5\j\a\0\e\9\m\p\t\k\t\g\z\6\6\w\x\m\h\7\w\p\n\g\5\8\v\a\6\e\p\2\s\o\h\x\2\c\0\6\w\v\m\e\o\6\1\i\t\k\z\f\o\f\0\7\k\6\5\i\y\7\h\c\1\6\n\0\p\6\t\g\g\u\a\j\h\b\t\r\p\n\l\1\m\6\m\t\m\r\y\g\m\1\c\2\d\7\p\6\i\c\n\x\o\l\w\6\t\r\w\2\e\e\o\n\a\4\x\h\a\0\s\h\h\8\q\r\q\y\4\6\f\s\b\6\8\l\e\a\j\j\6\g\3\z\e\4\g\8\2\g\9\6\z\w\s\n\b\d\7\b\h\m\l\6\j\l\1\r\7\1\l\m\m\2\k\e\2\1\c\a\a\s\q\p\n\3\w\b\9\d\8\8\z\r\w\u\5\v\y\d\q\u\r\f\p\k\b\r\h\b\0\1\x\7\t\1\4\4\u\h\n\s\l\m\9\n\r\p\a\q\g\t\u\i\z\2\7\c\j\f\n\6\1\l\k\q\5\9\e\h\e\3\8\e\w\m\3\5\c\j\l\8\y\6\m\h\n\t\q\i\k\l\j\p\4\5\n\4\b\u\e\z\9\u\d\i\p\y\1\r\x\9\u\8\6\w\d\2\q\w\1\x\4\n\f\b\4\k\2\h\k\8\5\r\y\h\y\4\x\4\7\1\7\i\5\z\j\z\m\n\8\v\k\q\2\r\m\m\f\o\z\g\f\3\n\i\5\h\5\p\c\0\o\f\l\7\6\a\v\u\u\a\o\k\0\2\c\4\1\l\o\d\g\i\n\o\n\9\u\8\7\z\4\o\j\1\c\q\y\6\8\w\z\g\p\5\0\x\d\u\i\2\k\w\c\i\l\m\h\0\u\e\1\a\x\l\d\f\l\j\j\n\y\1\a\f\l\h\y\0\f\y\1\1\u\o\8\m\d\u\o\j\z\z\o\1\8\c\h\i\z\z\k\s\2\g\a\p\r\y\b\v\g\3\u\a\s\w\q\k\p\n\l\0\6\5\g\w\n\5\g\z\9\y\g\f\u\9\m\5\e\y\e\r\1\s\l\p\c\5\y\w\w\m\i\n\x\f\d\2\t\d\q\y\4\f\s\c\p\g\w\0\w\m\b\6\a\m\w\2\w\n\n\3\h\n\h\z\b\b\b\h\t\0\x\w\l\2\f\4\7\0\m\x\u\1\i\f\k\o\t\4\u\u\9\f\i\t\x\q\t\c\b\r\j\p\x\f\a\4\h\2\d\9\h\p\z\1\l\z\o\q\2\h\n\e\d\q\v\5\q\h\x\f\v\u\q\4\l\s\k\6\t\l\o\i\z\8\q\h\t\p\x\1\h\i\v\j\y\x\y\h\5\i\k\r\l\0\i\9\i\u\g\v\3\n\d\g\u\u\8\m\3\j\u\y\6\4\d\z\a\o\b\h\k\q\m\b\t\l\8\h\5\h\b\x\w\x\u\y\6\c\1\t\w\s\k\z\5\i\a\q\r\s\e\b\v\9\n\g\k\0\h\6\b\m\t\5\k\e\4\m\s\o\l\z\h\s\s\4\d\r\r\2\c\8\f\m\7\5\p\j\u\u\5\q\p\y\m\6\2\3\b\u\l\4\d\u\e\z\0\l\2\i\r\6\e\t\2\s\v\6\n\g\8\r\x\k\f\5\q\m\j\v\c\7\f\3\l\z\x\d\k\1\z\s\8\9\q\u\e\f\g\u\4\z\j\p\o\l\k\d\1\3\b\c\e\k\q\p\t\0\p\y\n\k\i\e\v\r\r\0\x\l\b\j\u\6\3\0\q\j\r\r\t\v\b\u\7\h\a\1\q\9\w\2\h\i\d\2\1\z\t\d\5\f\2\q\i\8\6\7\p\4\y\i\6\d\7\2\u\x\p\5\l\y\w\w\a\y\0\s\8\j\g\p\k\t\j\t\j\u\u\o\a\y\e\m\w\g\l\z\s\i\r\s
\h\1\s\7\7\q\t\g\1\6\4\4\m\7\5\m\o\5\4\i\s\k\z\y\z\q\y\2\w\r\6\f\f\0\7\q\t\6\0\r\8\9\j\x\b\8\f\o\m\c\p\s\i\p\5\e\8\6\h\6\e\m\a\4\w\5\7\c\t\m\0\0\z\d\p\3\a\w\p\p\m\i\i\1\o\t\t\b\0\u\s\o\u\q\e\f\t\s\k\h\9\j\l\8\s\n\p\k\8\w\s\u\k\f\1\7\y\w\b\s\4\w\n\b\g\2\g\l\7\2\o\2\3\7\j\n\f\s\8\l\t\b\t\b\z\w\j\u\7\o\l\s\9\h\h\r\1\u\u\5\8\x\4\w\1\y\l\x\l\a\m\8\8\p\5\4\0\e\v\x\a\b\h\z\m\a\6\1\w\4\a\4\5\9\m\z\o\c\1\j\s\h\m\p\d\n\e\p\8\q\r\j\9\v\l\x\k\q\j\s\f\v\s\g\b\2\e\y\f\r\e\e\0\x\h\z\x\q\d\2\s\y\2\d\5\t\1\n\6\0\9\7\h\y\e\r\p\i\q\5\t\a\x\k\n\5\q\h\j\3\2\5\u\r\s\b\g\n\2\d\h\s\s\l\r\a\a\m\l\d\9\x\l\b\e\h\4\j\i\9\l\d\v\o\0\k\n\2\3\5\1\b\p\o\9\2\r\g\l\f\w\3\g\3\3\7\h\2\y\d\g\d\j\x\h\b\4\l\l\5\j\2\1\2\y\n\2\q\j\n\p\w\m\b\9\l\s\s\1\z\p\4\6\x\w\o\m\t\x\7\t\s\u\j\3\7\4\x\o\h\r\r\z\n\a\t\t\q\t\6\v\x\w\8\p\a\0\2\w\3\f\0\x\6\u\s\l\z\k\7\4\n\3\d\h\e\e\i\u\t\u\h\e\7\x\5\a\m\g\e\w\z\q\e\9\z\t\8\m\e\d\3\r\l\2\d\5\k\h\l\k\n\2\p\u\y\j\7\4\f\i\x\e\0\m\8\m\r\f\s\w\8\4\d\g\k\y\1\w\7\7\f\3\y\f\m\y\t\e\3\a\q\z\s\f\4\v\v\g\3\4\l\1\e\g\a\4\r\5\c\k\2\1\r\0\a\m\a\q\1\4\b\9\x\i\o\1\9\j\l\i\s\v\2\5\b\f\9\x\7\g ]] 00:07:50.825 00:07:50.825 real 0m0.949s 00:07:50.825 user 0m0.653s 00:07:50.825 sys 0m0.378s 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.825 16:02:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.825 { 00:07:50.825 "subsystems": [ 00:07:50.825 { 00:07:50.825 "subsystem": "bdev", 00:07:50.825 "config": [ 00:07:50.825 { 00:07:50.825 "params": { 00:07:50.825 "trtype": "pcie", 00:07:50.825 "traddr": "0000:00:10.0", 00:07:50.825 "name": "Nvme0" 00:07:50.825 }, 00:07:50.825 "method": "bdev_nvme_attach_controller" 00:07:50.826 }, 00:07:50.826 { 00:07:50.826 "method": "bdev_wait_for_examine" 00:07:50.826 } 00:07:50.826 ] 00:07:50.826 } 00:07:50.826 ] 00:07:50.826 } 00:07:50.826 [2024-11-19 16:02:57.372086] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
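The dd_rw_offset run that just reported its timings exercises offsets rather than block sizes: 4096 generated bytes are written one block into the bdev with --seek, read back from the same offset with --skip, and compared against the original pattern (the long [[ ... == ... ]] match above). A sketch with the flags from the trace, reusing SPDK_DD, DUMP0, DUMP1 and $CONF from the dd_rw sketch; the read-back/compare step is a simplification, since xtrace does not show the input redirection the script really uses:
# write the 4 KiB pattern at an offset of 1 block
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --seek=1 --json "$CONF"
# read that single block back from the same offset
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --skip=1 --count=1 --json "$CONF"
# first 4096 bytes of the read-back file must equal the generated pattern held in $data
read -rn4096 data_check < "$DUMP1"
[[ "$data_check" == "$data" ]]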
00:07:50.826 [2024-11-19 16:02:57.372176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73166 ] 00:07:50.826 [2024-11-19 16:02:57.518889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.084 [2024-11-19 16:02:57.537867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.084 [2024-11-19 16:02:57.564920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.084  [2024-11-19T16:02:57.799Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:51.084 00:07:51.084 16:02:57 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.084 ************************************ 00:07:51.084 END TEST spdk_dd_basic_rw 00:07:51.084 ************************************ 00:07:51.084 00:07:51.084 real 0m13.776s 00:07:51.084 user 0m9.916s 00:07:51.084 sys 0m4.368s 00:07:51.084 16:02:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.084 16:02:57 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.343 16:02:57 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:51.343 16:02:57 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.343 16:02:57 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.343 16:02:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:51.343 ************************************ 00:07:51.343 START TEST spdk_dd_posix 00:07:51.343 ************************************ 00:07:51.343 16:02:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:51.343 * Looking for test storage... 
00:07:51.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:51.343 16:02:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:51.343 16:02:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:07:51.343 16:02:57 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.343 --rc genhtml_branch_coverage=1 00:07:51.343 --rc genhtml_function_coverage=1 00:07:51.343 --rc genhtml_legend=1 00:07:51.343 --rc geninfo_all_blocks=1 00:07:51.343 --rc geninfo_unexecuted_blocks=1 00:07:51.343 00:07:51.343 ' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.343 --rc genhtml_branch_coverage=1 00:07:51.343 --rc genhtml_function_coverage=1 00:07:51.343 --rc genhtml_legend=1 00:07:51.343 --rc geninfo_all_blocks=1 00:07:51.343 --rc geninfo_unexecuted_blocks=1 00:07:51.343 00:07:51.343 ' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.343 --rc genhtml_branch_coverage=1 00:07:51.343 --rc genhtml_function_coverage=1 00:07:51.343 --rc genhtml_legend=1 00:07:51.343 --rc geninfo_all_blocks=1 00:07:51.343 --rc geninfo_unexecuted_blocks=1 00:07:51.343 00:07:51.343 ' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:51.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.343 --rc genhtml_branch_coverage=1 00:07:51.343 --rc genhtml_function_coverage=1 00:07:51.343 --rc genhtml_legend=1 00:07:51.343 --rc geninfo_all_blocks=1 00:07:51.343 --rc geninfo_unexecuted_blocks=1 00:07:51.343 00:07:51.343 ' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:51.343 * First test run, liburing in use 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:07:51.343 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 ************************************ 00:07:51.344 START TEST dd_flag_append 00:07:51.344 ************************************ 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:51.344 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=bb4xsk7tfakm5xmhqt2lz9z9gwprq1n3 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=omiqtawz1u2b5fhq8hjrvynv35yjjljr 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s bb4xsk7tfakm5xmhqt2lz9z9gwprq1n3 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s omiqtawz1u2b5fhq8hjrvynv35yjjljr 00:07:51.602 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:51.602 [2024-11-19 16:02:58.115454] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
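dd_flag_append, starting here, checks --oflag=append on plain files: two 32-byte random strings are generated, one is written to each dump file, and spdk_dd then copies the first file onto the end of the second, so the destination must hold both strings back to back. Roughly, with the values from this run (the printf redirections are not visible in the trace and are assumed; SPDK_DD, DUMP0 and DUMP1 as in the dd_rw sketch):
printf %s bb4xsk7tfakm5xmhqt2lz9z9gwprq1n3 > "$DUMP0"
printf %s omiqtawz1u2b5fhq8hjrvynv35yjjljr > "$DUMP1"
"$SPDK_DD" --if="$DUMP0" --of="$DUMP1" --oflag=append
# destination now holds its original string followed by the appended one
[[ "$(cat "$DUMP1")" == omiqtawz1u2b5fhq8hjrvynv35yjjljrbb4xsk7tfakm5xmhqt2lz9z9gwprq1n3 ]]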
00:07:51.602 [2024-11-19 16:02:58.116142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73233 ] 00:07:51.602 [2024-11-19 16:02:58.265465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.602 [2024-11-19 16:02:58.286015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.602 [2024-11-19 16:02:58.313892] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.861  [2024-11-19T16:02:58.576Z] Copying: 32/32 [B] (average 31 kBps) 00:07:51.861 00:07:51.861 ************************************ 00:07:51.861 END TEST dd_flag_append 00:07:51.861 ************************************ 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ omiqtawz1u2b5fhq8hjrvynv35yjjljrbb4xsk7tfakm5xmhqt2lz9z9gwprq1n3 == \o\m\i\q\t\a\w\z\1\u\2\b\5\f\h\q\8\h\j\r\v\y\n\v\3\5\y\j\j\l\j\r\b\b\4\x\s\k\7\t\f\a\k\m\5\x\m\h\q\t\2\l\z\9\z\9\g\w\p\r\q\1\n\3 ]] 00:07:51.861 00:07:51.861 real 0m0.392s 00:07:51.861 user 0m0.191s 00:07:51.861 sys 0m0.159s 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:51.861 ************************************ 00:07:51.861 START TEST dd_flag_directory 00:07:51.861 ************************************ 00:07:51.861 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:51.862 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:51.862 [2024-11-19 16:02:58.550763] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:51.862 [2024-11-19 16:02:58.550868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73267 ] 00:07:52.120 [2024-11-19 16:02:58.698077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.120 [2024-11-19 16:02:58.715635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.120 [2024-11-19 16:02:58.743118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.120 [2024-11-19 16:02:58.759191] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:52.120 [2024-11-19 16:02:58.759289] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:52.120 [2024-11-19 16:02:58.759338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.120 [2024-11-19 16:02:58.819876] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.379 16:02:58 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.379 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.380 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.380 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.380 16:02:58 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:52.380 [2024-11-19 16:02:58.933926] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:52.380 [2024-11-19 16:02:58.934033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73271 ] 00:07:52.380 [2024-11-19 16:02:59.079601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.648 [2024-11-19 16:02:59.100743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.648 [2024-11-19 16:02:59.127886] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.648 [2024-11-19 16:02:59.141900] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:52.648 [2024-11-19 16:02:59.141950] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:52.648 [2024-11-19 16:02:59.141981] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.648 [2024-11-19 16:02:59.196243] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:52.648 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:07:52.648 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:52.649 00:07:52.649 real 0m0.755s 00:07:52.649 user 0m0.360s 00:07:52.649 sys 0m0.187s 00:07:52.649 ************************************ 00:07:52.649 END TEST dd_flag_directory 00:07:52.649 ************************************ 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:52.649 16:02:59 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:52.649 ************************************ 00:07:52.649 START TEST dd_flag_nofollow 00:07:52.649 ************************************ 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.649 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:52.650 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.913 [2024-11-19 16:02:59.364363] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
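The dd_flag_nofollow setup above symlinks both dump files and then expects spdk_dd to refuse to read through a link when --iflag=nofollow is given (the trace later reports "Too many levels of symbolic links"). A hedged sketch of that expectation, with the test framework's NOT wrapper approximated by a plain failure branch:

    # Sketch only: the ln -fs setup and flags are from the trace; the if/exit
    # below stands in for the NOT helper in autotest_common.sh.
    ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
    ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
           --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1; then
        echo "nofollow unexpectedly succeeded" >&2
        exit 1
    fi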
00:07:52.913 [2024-11-19 16:02:59.364451] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73298 ] 00:07:52.913 [2024-11-19 16:02:59.504985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.913 [2024-11-19 16:02:59.522541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.913 [2024-11-19 16:02:59.548233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.913 [2024-11-19 16:02:59.562216] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:52.913 [2024-11-19 16:02:59.562312] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:52.913 [2024-11-19 16:02:59.562347] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.913 [2024-11-19 16:02:59.619742] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:53.172 16:02:59 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:53.172 16:02:59 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:53.172 [2024-11-19 16:02:59.726750] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:53.172 [2024-11-19 16:02:59.727030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73309 ] 00:07:53.172 [2024-11-19 16:02:59.872503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.431 [2024-11-19 16:02:59.891720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.431 [2024-11-19 16:02:59.917583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.431 [2024-11-19 16:02:59.931679] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:53.431 [2024-11-19 16:02:59.931729] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:53.431 [2024-11-19 16:02:59.931763] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:53.431 [2024-11-19 16:02:59.986638] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:53.431 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.431 [2024-11-19 16:03:00.109542] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:53.431 [2024-11-19 16:03:00.109789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73311 ] 00:07:53.690 [2024-11-19 16:03:00.257030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.690 [2024-11-19 16:03:00.275703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.690 [2024-11-19 16:03:00.302779] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.690  [2024-11-19T16:03:00.664Z] Copying: 512/512 [B] (average 500 kBps) 00:07:53.949 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ 3dwa7z94utqqgj56ap7pjmjiayb6j5jtg4w1cvjf6ydckdzgrwklmm97istoux91yt8fcuqv92lcuoe0edy6xss91jm49uczz8ogz12wfvfir77c77glo5tnmwzqdyp5em4ir254m83ek56pn9g1xsrhdncv436h7xuf9d3jwcwyw9holyepz233qumijkrvvy075q464x4ark6ajyont7ci9pws0c7ex7s89ui5uiytlkjwb0dmllhd27qcnid4n8ve6qhb8uah6z477apu2akphrczu4l1bijg185mh1hwo1uu0gh27l8skbo3dw82xfr1bx7xjzquwfw0ilrwg5u0eyoqcm7mytped2l88jgldxn6y8f18xtl7s734v5z3ofkof520egqd4cwgjyfs8s7qxxlwnydyxkbiktkf9svry31pvl2o3o116tmuxhp5s1voejqx1dz4ap01vtxb81v5kbc7esph6iwiltp9vk9pnufb9yrmq0j9lo2vjoc == \3\d\w\a\7\z\9\4\u\t\q\q\g\j\5\6\a\p\7\p\j\m\j\i\a\y\b\6\j\5\j\t\g\4\w\1\c\v\j\f\6\y\d\c\k\d\z\g\r\w\k\l\m\m\9\7\i\s\t\o\u\x\9\1\y\t\8\f\c\u\q\v\9\2\l\c\u\o\e\0\e\d\y\6\x\s\s\9\1\j\m\4\9\u\c\z\z\8\o\g\z\1\2\w\f\v\f\i\r\7\7\c\7\7\g\l\o\5\t\n\m\w\z\q\d\y\p\5\e\m\4\i\r\2\5\4\m\8\3\e\k\5\6\p\n\9\g\1\x\s\r\h\d\n\c\v\4\3\6\h\7\x\u\f\9\d\3\j\w\c\w\y\w\9\h\o\l\y\e\p\z\2\3\3\q\u\m\i\j\k\r\v\v\y\0\7\5\q\4\6\4\x\4\a\r\k\6\a\j\y\o\n\t\7\c\i\9\p\w\s\0\c\7\e\x\7\s\8\9\u\i\5\u\i\y\t\l\k\j\w\b\0\d\m\l\l\h\d\2\7\q\c\n\i\d\4\n\8\v\e\6\q\h\b\8\u\a\h\6\z\4\7\7\a\p\u\2\a\k\p\h\r\c\z\u\4\l\1\b\i\j\g\1\8\5\m\h\1\h\w\o\1\u\u\0\g\h\2\7\l\8\s\k\b\o\3\d\w\8\2\x\f\r\1\b\x\7\x\j\z\q\u\w\f\w\0\i\l\r\w\g\5\u\0\e\y\o\q\c\m\7\m\y\t\p\e\d\2\l\8\8\j\g\l\d\x\n\6\y\8\f\1\8\x\t\l\7\s\7\3\4\v\5\z\3\o\f\k\o\f\5\2\0\e\g\q\d\4\c\w\g\j\y\f\s\8\s\7\q\x\x\l\w\n\y\d\y\x\k\b\i\k\t\k\f\9\s\v\r\y\3\1\p\v\l\2\o\3\o\1\1\6\t\m\u\x\h\p\5\s\1\v\o\e\j\q\x\1\d\z\4\a\p\0\1\v\t\x\b\8\1\v\5\k\b\c\7\e\s\p\h\6\i\w\i\l\t\p\9\v\k\9\p\n\u\f\b\9\y\r\m\q\0\j\9\l\o\2\v\j\o\c ]] 00:07:53.949 00:07:53.949 real 0m1.127s 00:07:53.949 user 0m0.528s 00:07:53.949 sys 0m0.352s 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.949 ************************************ 00:07:53.949 END TEST dd_flag_nofollow 00:07:53.949 ************************************ 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:53.949 ************************************ 00:07:53.949 START TEST dd_flag_noatime 00:07:53.949 ************************************ 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732032180 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732032180 00:07:53.949 16:03:00 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:54.886 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.886 [2024-11-19 16:03:01.555378] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:54.886 [2024-11-19 16:03:01.555471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73359 ] 00:07:55.145 [2024-11-19 16:03:01.706507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.145 [2024-11-19 16:03:01.730492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.145 [2024-11-19 16:03:01.762380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.145  [2024-11-19T16:03:02.131Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.416 00:07:55.416 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.416 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732032180 )) 00:07:55.416 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.416 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732032180 )) 00:07:55.416 16:03:01 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.416 [2024-11-19 16:03:01.967935] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
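The dd_flag_noatime run above records the source file's access time with stat --printf=%X, sleeps one second, copies with --iflag=noatime, and then requires the atime to be unchanged (the (( atime_if == 1732032180 )) check). A minimal sketch assuming only what the trace shows; the src/dst variables are editorial shorthand for the dd.dump0/dd.dump1 paths:

    # Hedged sketch of the noatime check; 1732032180 is the epoch atime
    # captured in the trace, reproduced here only as an illustration.
    src=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    dst=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    atime_if=$(stat --printf=%X "$src")
    sleep 1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if="$src" --iflag=noatime --of="$dst"
    # reading with noatime must not advance the source file's access time
    (( $(stat --printf=%X "$src") == atime_if ))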
00:07:55.416 [2024-11-19 16:03:01.968026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73367 ] 00:07:55.416 [2024-11-19 16:03:02.113702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.689 [2024-11-19 16:03:02.135397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.689 [2024-11-19 16:03:02.163901] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.689  [2024-11-19T16:03:02.404Z] Copying: 512/512 [B] (average 500 kBps) 00:07:55.689 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.689 ************************************ 00:07:55.689 END TEST dd_flag_noatime 00:07:55.689 ************************************ 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732032182 )) 00:07:55.689 00:07:55.689 real 0m1.818s 00:07:55.689 user 0m0.400s 00:07:55.689 sys 0m0.360s 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:55.689 ************************************ 00:07:55.689 START TEST dd_flags_misc 00:07:55.689 ************************************ 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:55.689 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:55.948 [2024-11-19 16:03:02.414887] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
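The dd_flags_misc runs that follow iterate every pairing of the read flags (direct, nonblock) with the write flags (direct, nonblock, sync, dsync) declared above. A hedged sketch of that loop: the arrays and spdk_dd flags come from the trace, while the cmp call is an editorial stand-in for the trace's long [[ ... ]] content comparisons.

    # Sketch of the flag matrix exercised by dd_flags_misc.
    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
                --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag="$flag_ro" \
                --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag_rw"
            cmp /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
                /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1   # stand-in for the [[ ... ]] check
        done
    done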
00:07:55.948 [2024-11-19 16:03:02.415197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73390 ] 00:07:55.948 [2024-11-19 16:03:02.565523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.948 [2024-11-19 16:03:02.584818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.948 [2024-11-19 16:03:02.611388] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.948  [2024-11-19T16:03:02.922Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.207 00:07:56.207 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hehoogruachoussmcnhzn71l2y8jwgoxxs5i87atbz981kp0aude0soawny8mw2ilhakamtdd0bfnsrxmu61bvlfx3ufab1padhjf9dv2z5esthoyqiq6y71wm8xfgk57102o27yiz96lfgykgx1i6wt01x976m5k8za9c43b54q6fymhs1tudpoplfhudo00i0x7gx8n4buolkdp4t4dg4wu2jnzk3x9glgibnu6u5c3fpay6bu414agyuq5gogri3vpneoy4ub8hj8qfm1cteqndp1bm0ej7k0kwenb8cgu2agsm42oi7vc7ke9wmpa8t58dgabl6v8rifz13nmyt0cumkeeq235rgg9im6gfpam5dnfurcqldr3gbql71yr5gh2ddjv5742h1wujccv9b5wcco2yy2903k94u5yriepvppxvbd1pkkmgsep29rs7m54opte86nikornljwjwe8roon2h6s4bgnf0aw03wi7a2uiuggortb7jagzon == \h\e\h\o\o\g\r\u\a\c\h\o\u\s\s\m\c\n\h\z\n\7\1\l\2\y\8\j\w\g\o\x\x\s\5\i\8\7\a\t\b\z\9\8\1\k\p\0\a\u\d\e\0\s\o\a\w\n\y\8\m\w\2\i\l\h\a\k\a\m\t\d\d\0\b\f\n\s\r\x\m\u\6\1\b\v\l\f\x\3\u\f\a\b\1\p\a\d\h\j\f\9\d\v\2\z\5\e\s\t\h\o\y\q\i\q\6\y\7\1\w\m\8\x\f\g\k\5\7\1\0\2\o\2\7\y\i\z\9\6\l\f\g\y\k\g\x\1\i\6\w\t\0\1\x\9\7\6\m\5\k\8\z\a\9\c\4\3\b\5\4\q\6\f\y\m\h\s\1\t\u\d\p\o\p\l\f\h\u\d\o\0\0\i\0\x\7\g\x\8\n\4\b\u\o\l\k\d\p\4\t\4\d\g\4\w\u\2\j\n\z\k\3\x\9\g\l\g\i\b\n\u\6\u\5\c\3\f\p\a\y\6\b\u\4\1\4\a\g\y\u\q\5\g\o\g\r\i\3\v\p\n\e\o\y\4\u\b\8\h\j\8\q\f\m\1\c\t\e\q\n\d\p\1\b\m\0\e\j\7\k\0\k\w\e\n\b\8\c\g\u\2\a\g\s\m\4\2\o\i\7\v\c\7\k\e\9\w\m\p\a\8\t\5\8\d\g\a\b\l\6\v\8\r\i\f\z\1\3\n\m\y\t\0\c\u\m\k\e\e\q\2\3\5\r\g\g\9\i\m\6\g\f\p\a\m\5\d\n\f\u\r\c\q\l\d\r\3\g\b\q\l\7\1\y\r\5\g\h\2\d\d\j\v\5\7\4\2\h\1\w\u\j\c\c\v\9\b\5\w\c\c\o\2\y\y\2\9\0\3\k\9\4\u\5\y\r\i\e\p\v\p\p\x\v\b\d\1\p\k\k\m\g\s\e\p\2\9\r\s\7\m\5\4\o\p\t\e\8\6\n\i\k\o\r\n\l\j\w\j\w\e\8\r\o\o\n\2\h\6\s\4\b\g\n\f\0\a\w\0\3\w\i\7\a\2\u\i\u\g\g\o\r\t\b\7\j\a\g\z\o\n ]] 00:07:56.207 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.207 16:03:02 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:56.207 [2024-11-19 16:03:02.786399] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:56.208 [2024-11-19 16:03:02.786495] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73405 ] 00:07:56.467 [2024-11-19 16:03:02.932359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.467 [2024-11-19 16:03:02.952873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.467 [2024-11-19 16:03:02.982508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.467  [2024-11-19T16:03:03.182Z] Copying: 512/512 [B] (average 500 kBps) 00:07:56.467 00:07:56.467 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hehoogruachoussmcnhzn71l2y8jwgoxxs5i87atbz981kp0aude0soawny8mw2ilhakamtdd0bfnsrxmu61bvlfx3ufab1padhjf9dv2z5esthoyqiq6y71wm8xfgk57102o27yiz96lfgykgx1i6wt01x976m5k8za9c43b54q6fymhs1tudpoplfhudo00i0x7gx8n4buolkdp4t4dg4wu2jnzk3x9glgibnu6u5c3fpay6bu414agyuq5gogri3vpneoy4ub8hj8qfm1cteqndp1bm0ej7k0kwenb8cgu2agsm42oi7vc7ke9wmpa8t58dgabl6v8rifz13nmyt0cumkeeq235rgg9im6gfpam5dnfurcqldr3gbql71yr5gh2ddjv5742h1wujccv9b5wcco2yy2903k94u5yriepvppxvbd1pkkmgsep29rs7m54opte86nikornljwjwe8roon2h6s4bgnf0aw03wi7a2uiuggortb7jagzon == \h\e\h\o\o\g\r\u\a\c\h\o\u\s\s\m\c\n\h\z\n\7\1\l\2\y\8\j\w\g\o\x\x\s\5\i\8\7\a\t\b\z\9\8\1\k\p\0\a\u\d\e\0\s\o\a\w\n\y\8\m\w\2\i\l\h\a\k\a\m\t\d\d\0\b\f\n\s\r\x\m\u\6\1\b\v\l\f\x\3\u\f\a\b\1\p\a\d\h\j\f\9\d\v\2\z\5\e\s\t\h\o\y\q\i\q\6\y\7\1\w\m\8\x\f\g\k\5\7\1\0\2\o\2\7\y\i\z\9\6\l\f\g\y\k\g\x\1\i\6\w\t\0\1\x\9\7\6\m\5\k\8\z\a\9\c\4\3\b\5\4\q\6\f\y\m\h\s\1\t\u\d\p\o\p\l\f\h\u\d\o\0\0\i\0\x\7\g\x\8\n\4\b\u\o\l\k\d\p\4\t\4\d\g\4\w\u\2\j\n\z\k\3\x\9\g\l\g\i\b\n\u\6\u\5\c\3\f\p\a\y\6\b\u\4\1\4\a\g\y\u\q\5\g\o\g\r\i\3\v\p\n\e\o\y\4\u\b\8\h\j\8\q\f\m\1\c\t\e\q\n\d\p\1\b\m\0\e\j\7\k\0\k\w\e\n\b\8\c\g\u\2\a\g\s\m\4\2\o\i\7\v\c\7\k\e\9\w\m\p\a\8\t\5\8\d\g\a\b\l\6\v\8\r\i\f\z\1\3\n\m\y\t\0\c\u\m\k\e\e\q\2\3\5\r\g\g\9\i\m\6\g\f\p\a\m\5\d\n\f\u\r\c\q\l\d\r\3\g\b\q\l\7\1\y\r\5\g\h\2\d\d\j\v\5\7\4\2\h\1\w\u\j\c\c\v\9\b\5\w\c\c\o\2\y\y\2\9\0\3\k\9\4\u\5\y\r\i\e\p\v\p\p\x\v\b\d\1\p\k\k\m\g\s\e\p\2\9\r\s\7\m\5\4\o\p\t\e\8\6\n\i\k\o\r\n\l\j\w\j\w\e\8\r\o\o\n\2\h\6\s\4\b\g\n\f\0\a\w\0\3\w\i\7\a\2\u\i\u\g\g\o\r\t\b\7\j\a\g\z\o\n ]] 00:07:56.467 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.467 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:56.467 [2024-11-19 16:03:03.164145] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:56.467 [2024-11-19 16:03:03.164268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73409 ] 00:07:56.726 [2024-11-19 16:03:03.311561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.726 [2024-11-19 16:03:03.329812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.726 [2024-11-19 16:03:03.356324] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.726  [2024-11-19T16:03:03.700Z] Copying: 512/512 [B] (average 83 kBps) 00:07:56.985 00:07:56.985 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hehoogruachoussmcnhzn71l2y8jwgoxxs5i87atbz981kp0aude0soawny8mw2ilhakamtdd0bfnsrxmu61bvlfx3ufab1padhjf9dv2z5esthoyqiq6y71wm8xfgk57102o27yiz96lfgykgx1i6wt01x976m5k8za9c43b54q6fymhs1tudpoplfhudo00i0x7gx8n4buolkdp4t4dg4wu2jnzk3x9glgibnu6u5c3fpay6bu414agyuq5gogri3vpneoy4ub8hj8qfm1cteqndp1bm0ej7k0kwenb8cgu2agsm42oi7vc7ke9wmpa8t58dgabl6v8rifz13nmyt0cumkeeq235rgg9im6gfpam5dnfurcqldr3gbql71yr5gh2ddjv5742h1wujccv9b5wcco2yy2903k94u5yriepvppxvbd1pkkmgsep29rs7m54opte86nikornljwjwe8roon2h6s4bgnf0aw03wi7a2uiuggortb7jagzon == \h\e\h\o\o\g\r\u\a\c\h\o\u\s\s\m\c\n\h\z\n\7\1\l\2\y\8\j\w\g\o\x\x\s\5\i\8\7\a\t\b\z\9\8\1\k\p\0\a\u\d\e\0\s\o\a\w\n\y\8\m\w\2\i\l\h\a\k\a\m\t\d\d\0\b\f\n\s\r\x\m\u\6\1\b\v\l\f\x\3\u\f\a\b\1\p\a\d\h\j\f\9\d\v\2\z\5\e\s\t\h\o\y\q\i\q\6\y\7\1\w\m\8\x\f\g\k\5\7\1\0\2\o\2\7\y\i\z\9\6\l\f\g\y\k\g\x\1\i\6\w\t\0\1\x\9\7\6\m\5\k\8\z\a\9\c\4\3\b\5\4\q\6\f\y\m\h\s\1\t\u\d\p\o\p\l\f\h\u\d\o\0\0\i\0\x\7\g\x\8\n\4\b\u\o\l\k\d\p\4\t\4\d\g\4\w\u\2\j\n\z\k\3\x\9\g\l\g\i\b\n\u\6\u\5\c\3\f\p\a\y\6\b\u\4\1\4\a\g\y\u\q\5\g\o\g\r\i\3\v\p\n\e\o\y\4\u\b\8\h\j\8\q\f\m\1\c\t\e\q\n\d\p\1\b\m\0\e\j\7\k\0\k\w\e\n\b\8\c\g\u\2\a\g\s\m\4\2\o\i\7\v\c\7\k\e\9\w\m\p\a\8\t\5\8\d\g\a\b\l\6\v\8\r\i\f\z\1\3\n\m\y\t\0\c\u\m\k\e\e\q\2\3\5\r\g\g\9\i\m\6\g\f\p\a\m\5\d\n\f\u\r\c\q\l\d\r\3\g\b\q\l\7\1\y\r\5\g\h\2\d\d\j\v\5\7\4\2\h\1\w\u\j\c\c\v\9\b\5\w\c\c\o\2\y\y\2\9\0\3\k\9\4\u\5\y\r\i\e\p\v\p\p\x\v\b\d\1\p\k\k\m\g\s\e\p\2\9\r\s\7\m\5\4\o\p\t\e\8\6\n\i\k\o\r\n\l\j\w\j\w\e\8\r\o\o\n\2\h\6\s\4\b\g\n\f\0\a\w\0\3\w\i\7\a\2\u\i\u\g\g\o\r\t\b\7\j\a\g\z\o\n ]] 00:07:56.985 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:56.986 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:56.986 [2024-11-19 16:03:03.535224] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:56.986 [2024-11-19 16:03:03.535347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73414 ] 00:07:56.986 [2024-11-19 16:03:03.683416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.245 [2024-11-19 16:03:03.704663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.245 [2024-11-19 16:03:03.731290] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.245  [2024-11-19T16:03:03.960Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.245 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ hehoogruachoussmcnhzn71l2y8jwgoxxs5i87atbz981kp0aude0soawny8mw2ilhakamtdd0bfnsrxmu61bvlfx3ufab1padhjf9dv2z5esthoyqiq6y71wm8xfgk57102o27yiz96lfgykgx1i6wt01x976m5k8za9c43b54q6fymhs1tudpoplfhudo00i0x7gx8n4buolkdp4t4dg4wu2jnzk3x9glgibnu6u5c3fpay6bu414agyuq5gogri3vpneoy4ub8hj8qfm1cteqndp1bm0ej7k0kwenb8cgu2agsm42oi7vc7ke9wmpa8t58dgabl6v8rifz13nmyt0cumkeeq235rgg9im6gfpam5dnfurcqldr3gbql71yr5gh2ddjv5742h1wujccv9b5wcco2yy2903k94u5yriepvppxvbd1pkkmgsep29rs7m54opte86nikornljwjwe8roon2h6s4bgnf0aw03wi7a2uiuggortb7jagzon == \h\e\h\o\o\g\r\u\a\c\h\o\u\s\s\m\c\n\h\z\n\7\1\l\2\y\8\j\w\g\o\x\x\s\5\i\8\7\a\t\b\z\9\8\1\k\p\0\a\u\d\e\0\s\o\a\w\n\y\8\m\w\2\i\l\h\a\k\a\m\t\d\d\0\b\f\n\s\r\x\m\u\6\1\b\v\l\f\x\3\u\f\a\b\1\p\a\d\h\j\f\9\d\v\2\z\5\e\s\t\h\o\y\q\i\q\6\y\7\1\w\m\8\x\f\g\k\5\7\1\0\2\o\2\7\y\i\z\9\6\l\f\g\y\k\g\x\1\i\6\w\t\0\1\x\9\7\6\m\5\k\8\z\a\9\c\4\3\b\5\4\q\6\f\y\m\h\s\1\t\u\d\p\o\p\l\f\h\u\d\o\0\0\i\0\x\7\g\x\8\n\4\b\u\o\l\k\d\p\4\t\4\d\g\4\w\u\2\j\n\z\k\3\x\9\g\l\g\i\b\n\u\6\u\5\c\3\f\p\a\y\6\b\u\4\1\4\a\g\y\u\q\5\g\o\g\r\i\3\v\p\n\e\o\y\4\u\b\8\h\j\8\q\f\m\1\c\t\e\q\n\d\p\1\b\m\0\e\j\7\k\0\k\w\e\n\b\8\c\g\u\2\a\g\s\m\4\2\o\i\7\v\c\7\k\e\9\w\m\p\a\8\t\5\8\d\g\a\b\l\6\v\8\r\i\f\z\1\3\n\m\y\t\0\c\u\m\k\e\e\q\2\3\5\r\g\g\9\i\m\6\g\f\p\a\m\5\d\n\f\u\r\c\q\l\d\r\3\g\b\q\l\7\1\y\r\5\g\h\2\d\d\j\v\5\7\4\2\h\1\w\u\j\c\c\v\9\b\5\w\c\c\o\2\y\y\2\9\0\3\k\9\4\u\5\y\r\i\e\p\v\p\p\x\v\b\d\1\p\k\k\m\g\s\e\p\2\9\r\s\7\m\5\4\o\p\t\e\8\6\n\i\k\o\r\n\l\j\w\j\w\e\8\r\o\o\n\2\h\6\s\4\b\g\n\f\0\a\w\0\3\w\i\7\a\2\u\i\u\g\g\o\r\t\b\7\j\a\g\z\o\n ]] 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.245 16:03:03 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:57.245 [2024-11-19 16:03:03.915878] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:57.245 [2024-11-19 16:03:03.916153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73428 ] 00:07:57.504 [2024-11-19 16:03:04.060559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.504 [2024-11-19 16:03:04.078146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.504 [2024-11-19 16:03:04.103941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.504  [2024-11-19T16:03:04.479Z] Copying: 512/512 [B] (average 500 kBps) 00:07:57.764 00:07:57.764 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f2s4x7kzs1yh54h6s262zefutn2aucvtistrksc078dhe0lfjb36pohxq8vw8pthayb2eg6upmyse5avynpfj0znc32eixn9er4e7x2n7yn0k9jnj4roilnea4t2w4pecjidm3ycc4q2obr30h0y3msx0zc9tn1mwngp6hvbma57bs9gdaicwzjgqs2scbn79pqk317tsmcx4uupighufg7nqoh1dan7944e9zm9fiw9879eexxuyjautbrjjx8yjvmjxbw059tyo7elrk06piuohqlpmno4a27czakz2rkbgs5oczay0ibt572a2oekwa1su58ka00yemu8tcxyqu8k8ieq0w023pdsxq5si5bg2qzki7a3iivr70fgks9bnzunmayz61a2ihjszm037glpkr9dmhf4t0h6v6iczfi6ehn3n1um3sphkg6k2uzaulua7oqwy7ufq9h8lwxgs4iwlbloqnwd3val8s7qh2spbujubl9ysir9eqw692af == \f\2\s\4\x\7\k\z\s\1\y\h\5\4\h\6\s\2\6\2\z\e\f\u\t\n\2\a\u\c\v\t\i\s\t\r\k\s\c\0\7\8\d\h\e\0\l\f\j\b\3\6\p\o\h\x\q\8\v\w\8\p\t\h\a\y\b\2\e\g\6\u\p\m\y\s\e\5\a\v\y\n\p\f\j\0\z\n\c\3\2\e\i\x\n\9\e\r\4\e\7\x\2\n\7\y\n\0\k\9\j\n\j\4\r\o\i\l\n\e\a\4\t\2\w\4\p\e\c\j\i\d\m\3\y\c\c\4\q\2\o\b\r\3\0\h\0\y\3\m\s\x\0\z\c\9\t\n\1\m\w\n\g\p\6\h\v\b\m\a\5\7\b\s\9\g\d\a\i\c\w\z\j\g\q\s\2\s\c\b\n\7\9\p\q\k\3\1\7\t\s\m\c\x\4\u\u\p\i\g\h\u\f\g\7\n\q\o\h\1\d\a\n\7\9\4\4\e\9\z\m\9\f\i\w\9\8\7\9\e\e\x\x\u\y\j\a\u\t\b\r\j\j\x\8\y\j\v\m\j\x\b\w\0\5\9\t\y\o\7\e\l\r\k\0\6\p\i\u\o\h\q\l\p\m\n\o\4\a\2\7\c\z\a\k\z\2\r\k\b\g\s\5\o\c\z\a\y\0\i\b\t\5\7\2\a\2\o\e\k\w\a\1\s\u\5\8\k\a\0\0\y\e\m\u\8\t\c\x\y\q\u\8\k\8\i\e\q\0\w\0\2\3\p\d\s\x\q\5\s\i\5\b\g\2\q\z\k\i\7\a\3\i\i\v\r\7\0\f\g\k\s\9\b\n\z\u\n\m\a\y\z\6\1\a\2\i\h\j\s\z\m\0\3\7\g\l\p\k\r\9\d\m\h\f\4\t\0\h\6\v\6\i\c\z\f\i\6\e\h\n\3\n\1\u\m\3\s\p\h\k\g\6\k\2\u\z\a\u\l\u\a\7\o\q\w\y\7\u\f\q\9\h\8\l\w\x\g\s\4\i\w\l\b\l\o\q\n\w\d\3\v\a\l\8\s\7\q\h\2\s\p\b\u\j\u\b\l\9\y\s\i\r\9\e\q\w\6\9\2\a\f ]] 00:07:57.764 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:57.764 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:57.764 [2024-11-19 16:03:04.290936] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:57.764 [2024-11-19 16:03:04.291036] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73432 ] 00:07:57.764 [2024-11-19 16:03:04.432039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.764 [2024-11-19 16:03:04.449530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.023 [2024-11-19 16:03:04.476164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.023  [2024-11-19T16:03:04.738Z] Copying: 512/512 [B] (average 500 kBps) 00:07:58.023 00:07:58.023 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f2s4x7kzs1yh54h6s262zefutn2aucvtistrksc078dhe0lfjb36pohxq8vw8pthayb2eg6upmyse5avynpfj0znc32eixn9er4e7x2n7yn0k9jnj4roilnea4t2w4pecjidm3ycc4q2obr30h0y3msx0zc9tn1mwngp6hvbma57bs9gdaicwzjgqs2scbn79pqk317tsmcx4uupighufg7nqoh1dan7944e9zm9fiw9879eexxuyjautbrjjx8yjvmjxbw059tyo7elrk06piuohqlpmno4a27czakz2rkbgs5oczay0ibt572a2oekwa1su58ka00yemu8tcxyqu8k8ieq0w023pdsxq5si5bg2qzki7a3iivr70fgks9bnzunmayz61a2ihjszm037glpkr9dmhf4t0h6v6iczfi6ehn3n1um3sphkg6k2uzaulua7oqwy7ufq9h8lwxgs4iwlbloqnwd3val8s7qh2spbujubl9ysir9eqw692af == \f\2\s\4\x\7\k\z\s\1\y\h\5\4\h\6\s\2\6\2\z\e\f\u\t\n\2\a\u\c\v\t\i\s\t\r\k\s\c\0\7\8\d\h\e\0\l\f\j\b\3\6\p\o\h\x\q\8\v\w\8\p\t\h\a\y\b\2\e\g\6\u\p\m\y\s\e\5\a\v\y\n\p\f\j\0\z\n\c\3\2\e\i\x\n\9\e\r\4\e\7\x\2\n\7\y\n\0\k\9\j\n\j\4\r\o\i\l\n\e\a\4\t\2\w\4\p\e\c\j\i\d\m\3\y\c\c\4\q\2\o\b\r\3\0\h\0\y\3\m\s\x\0\z\c\9\t\n\1\m\w\n\g\p\6\h\v\b\m\a\5\7\b\s\9\g\d\a\i\c\w\z\j\g\q\s\2\s\c\b\n\7\9\p\q\k\3\1\7\t\s\m\c\x\4\u\u\p\i\g\h\u\f\g\7\n\q\o\h\1\d\a\n\7\9\4\4\e\9\z\m\9\f\i\w\9\8\7\9\e\e\x\x\u\y\j\a\u\t\b\r\j\j\x\8\y\j\v\m\j\x\b\w\0\5\9\t\y\o\7\e\l\r\k\0\6\p\i\u\o\h\q\l\p\m\n\o\4\a\2\7\c\z\a\k\z\2\r\k\b\g\s\5\o\c\z\a\y\0\i\b\t\5\7\2\a\2\o\e\k\w\a\1\s\u\5\8\k\a\0\0\y\e\m\u\8\t\c\x\y\q\u\8\k\8\i\e\q\0\w\0\2\3\p\d\s\x\q\5\s\i\5\b\g\2\q\z\k\i\7\a\3\i\i\v\r\7\0\f\g\k\s\9\b\n\z\u\n\m\a\y\z\6\1\a\2\i\h\j\s\z\m\0\3\7\g\l\p\k\r\9\d\m\h\f\4\t\0\h\6\v\6\i\c\z\f\i\6\e\h\n\3\n\1\u\m\3\s\p\h\k\g\6\k\2\u\z\a\u\l\u\a\7\o\q\w\y\7\u\f\q\9\h\8\l\w\x\g\s\4\i\w\l\b\l\o\q\n\w\d\3\v\a\l\8\s\7\q\h\2\s\p\b\u\j\u\b\l\9\y\s\i\r\9\e\q\w\6\9\2\a\f ]] 00:07:58.024 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:58.024 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:58.024 [2024-11-19 16:03:04.664064] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:58.024 [2024-11-19 16:03:04.664156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73447 ] 00:07:58.283 [2024-11-19 16:03:04.811173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.283 [2024-11-19 16:03:04.828624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.283 [2024-11-19 16:03:04.854415] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.283  [2024-11-19T16:03:04.998Z] Copying: 512/512 [B] (average 500 kBps) 00:07:58.283 00:07:58.283 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f2s4x7kzs1yh54h6s262zefutn2aucvtistrksc078dhe0lfjb36pohxq8vw8pthayb2eg6upmyse5avynpfj0znc32eixn9er4e7x2n7yn0k9jnj4roilnea4t2w4pecjidm3ycc4q2obr30h0y3msx0zc9tn1mwngp6hvbma57bs9gdaicwzjgqs2scbn79pqk317tsmcx4uupighufg7nqoh1dan7944e9zm9fiw9879eexxuyjautbrjjx8yjvmjxbw059tyo7elrk06piuohqlpmno4a27czakz2rkbgs5oczay0ibt572a2oekwa1su58ka00yemu8tcxyqu8k8ieq0w023pdsxq5si5bg2qzki7a3iivr70fgks9bnzunmayz61a2ihjszm037glpkr9dmhf4t0h6v6iczfi6ehn3n1um3sphkg6k2uzaulua7oqwy7ufq9h8lwxgs4iwlbloqnwd3val8s7qh2spbujubl9ysir9eqw692af == \f\2\s\4\x\7\k\z\s\1\y\h\5\4\h\6\s\2\6\2\z\e\f\u\t\n\2\a\u\c\v\t\i\s\t\r\k\s\c\0\7\8\d\h\e\0\l\f\j\b\3\6\p\o\h\x\q\8\v\w\8\p\t\h\a\y\b\2\e\g\6\u\p\m\y\s\e\5\a\v\y\n\p\f\j\0\z\n\c\3\2\e\i\x\n\9\e\r\4\e\7\x\2\n\7\y\n\0\k\9\j\n\j\4\r\o\i\l\n\e\a\4\t\2\w\4\p\e\c\j\i\d\m\3\y\c\c\4\q\2\o\b\r\3\0\h\0\y\3\m\s\x\0\z\c\9\t\n\1\m\w\n\g\p\6\h\v\b\m\a\5\7\b\s\9\g\d\a\i\c\w\z\j\g\q\s\2\s\c\b\n\7\9\p\q\k\3\1\7\t\s\m\c\x\4\u\u\p\i\g\h\u\f\g\7\n\q\o\h\1\d\a\n\7\9\4\4\e\9\z\m\9\f\i\w\9\8\7\9\e\e\x\x\u\y\j\a\u\t\b\r\j\j\x\8\y\j\v\m\j\x\b\w\0\5\9\t\y\o\7\e\l\r\k\0\6\p\i\u\o\h\q\l\p\m\n\o\4\a\2\7\c\z\a\k\z\2\r\k\b\g\s\5\o\c\z\a\y\0\i\b\t\5\7\2\a\2\o\e\k\w\a\1\s\u\5\8\k\a\0\0\y\e\m\u\8\t\c\x\y\q\u\8\k\8\i\e\q\0\w\0\2\3\p\d\s\x\q\5\s\i\5\b\g\2\q\z\k\i\7\a\3\i\i\v\r\7\0\f\g\k\s\9\b\n\z\u\n\m\a\y\z\6\1\a\2\i\h\j\s\z\m\0\3\7\g\l\p\k\r\9\d\m\h\f\4\t\0\h\6\v\6\i\c\z\f\i\6\e\h\n\3\n\1\u\m\3\s\p\h\k\g\6\k\2\u\z\a\u\l\u\a\7\o\q\w\y\7\u\f\q\9\h\8\l\w\x\g\s\4\i\w\l\b\l\o\q\n\w\d\3\v\a\l\8\s\7\q\h\2\s\p\b\u\j\u\b\l\9\y\s\i\r\9\e\q\w\6\9\2\a\f ]] 00:07:58.283 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:58.283 16:03:04 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:07:58.543 [2024-11-19 16:03:05.043833] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:07:58.543 [2024-11-19 16:03:05.043951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73451 ] 00:07:58.543 [2024-11-19 16:03:05.188648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.543 [2024-11-19 16:03:05.207280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.543 [2024-11-19 16:03:05.233321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.543  [2024-11-19T16:03:05.517Z] Copying: 512/512 [B] (average 500 kBps) 00:07:58.802 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ f2s4x7kzs1yh54h6s262zefutn2aucvtistrksc078dhe0lfjb36pohxq8vw8pthayb2eg6upmyse5avynpfj0znc32eixn9er4e7x2n7yn0k9jnj4roilnea4t2w4pecjidm3ycc4q2obr30h0y3msx0zc9tn1mwngp6hvbma57bs9gdaicwzjgqs2scbn79pqk317tsmcx4uupighufg7nqoh1dan7944e9zm9fiw9879eexxuyjautbrjjx8yjvmjxbw059tyo7elrk06piuohqlpmno4a27czakz2rkbgs5oczay0ibt572a2oekwa1su58ka00yemu8tcxyqu8k8ieq0w023pdsxq5si5bg2qzki7a3iivr70fgks9bnzunmayz61a2ihjszm037glpkr9dmhf4t0h6v6iczfi6ehn3n1um3sphkg6k2uzaulua7oqwy7ufq9h8lwxgs4iwlbloqnwd3val8s7qh2spbujubl9ysir9eqw692af == \f\2\s\4\x\7\k\z\s\1\y\h\5\4\h\6\s\2\6\2\z\e\f\u\t\n\2\a\u\c\v\t\i\s\t\r\k\s\c\0\7\8\d\h\e\0\l\f\j\b\3\6\p\o\h\x\q\8\v\w\8\p\t\h\a\y\b\2\e\g\6\u\p\m\y\s\e\5\a\v\y\n\p\f\j\0\z\n\c\3\2\e\i\x\n\9\e\r\4\e\7\x\2\n\7\y\n\0\k\9\j\n\j\4\r\o\i\l\n\e\a\4\t\2\w\4\p\e\c\j\i\d\m\3\y\c\c\4\q\2\o\b\r\3\0\h\0\y\3\m\s\x\0\z\c\9\t\n\1\m\w\n\g\p\6\h\v\b\m\a\5\7\b\s\9\g\d\a\i\c\w\z\j\g\q\s\2\s\c\b\n\7\9\p\q\k\3\1\7\t\s\m\c\x\4\u\u\p\i\g\h\u\f\g\7\n\q\o\h\1\d\a\n\7\9\4\4\e\9\z\m\9\f\i\w\9\8\7\9\e\e\x\x\u\y\j\a\u\t\b\r\j\j\x\8\y\j\v\m\j\x\b\w\0\5\9\t\y\o\7\e\l\r\k\0\6\p\i\u\o\h\q\l\p\m\n\o\4\a\2\7\c\z\a\k\z\2\r\k\b\g\s\5\o\c\z\a\y\0\i\b\t\5\7\2\a\2\o\e\k\w\a\1\s\u\5\8\k\a\0\0\y\e\m\u\8\t\c\x\y\q\u\8\k\8\i\e\q\0\w\0\2\3\p\d\s\x\q\5\s\i\5\b\g\2\q\z\k\i\7\a\3\i\i\v\r\7\0\f\g\k\s\9\b\n\z\u\n\m\a\y\z\6\1\a\2\i\h\j\s\z\m\0\3\7\g\l\p\k\r\9\d\m\h\f\4\t\0\h\6\v\6\i\c\z\f\i\6\e\h\n\3\n\1\u\m\3\s\p\h\k\g\6\k\2\u\z\a\u\l\u\a\7\o\q\w\y\7\u\f\q\9\h\8\l\w\x\g\s\4\i\w\l\b\l\o\q\n\w\d\3\v\a\l\8\s\7\q\h\2\s\p\b\u\j\u\b\l\9\y\s\i\r\9\e\q\w\6\9\2\a\f ]] 00:07:58.802 00:07:58.802 real 0m3.004s 00:07:58.802 user 0m1.443s 00:07:58.802 sys 0m1.286s 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.802 ************************************ 00:07:58.802 END TEST dd_flags_misc 00:07:58.802 ************************************ 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:07:58.802 * Second test run, disabling liburing, forcing AIO 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.802 ************************************ 00:07:58.802 START TEST dd_flag_append_forced_aio 00:07:58.802 ************************************ 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=q8b5fsxf97is8oet4n3kujeehl6awn7n 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:07:58.802 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:07:58.803 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:58.803 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=k7vu2duf1gnvtdb7gjffcso5f7r9st7d 00:07:58.803 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s q8b5fsxf97is8oet4n3kujeehl6awn7n 00:07:58.803 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s k7vu2duf1gnvtdb7gjffcso5f7r9st7d 00:07:58.803 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:58.803 [2024-11-19 16:03:05.468220] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
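From the "* Second test run, disabling liburing, forcing AIO" marker above onward, the suite repeats the same posix tests with --aio appended to every spdk_dd invocation via DD_APP+=("--aio"). A one-command illustration of the difference, using the append case; the initial contents of DD_APP are assumed here, and everything except the --aio flag is as in the first pass:

    # Second pass: force the AIO backend instead of liburing (--aio taken from the trace;
    # the base DD_APP definition is assumed for this sketch).
    DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    DD_APP+=("--aio")
    "${DD_APP[@]}" \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append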
00:07:58.803 [2024-11-19 16:03:05.468323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73474 ] 00:07:59.062 [2024-11-19 16:03:05.616766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.062 [2024-11-19 16:03:05.634809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.062 [2024-11-19 16:03:05.660962] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.062  [2024-11-19T16:03:06.037Z] Copying: 32/32 [B] (average 31 kBps) 00:07:59.322 00:07:59.322 ************************************ 00:07:59.322 END TEST dd_flag_append_forced_aio 00:07:59.322 ************************************ 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ k7vu2duf1gnvtdb7gjffcso5f7r9st7dq8b5fsxf97is8oet4n3kujeehl6awn7n == \k\7\v\u\2\d\u\f\1\g\n\v\t\d\b\7\g\j\f\f\c\s\o\5\f\7\r\9\s\t\7\d\q\8\b\5\f\s\x\f\9\7\i\s\8\o\e\t\4\n\3\k\u\j\e\e\h\l\6\a\w\n\7\n ]] 00:07:59.322 00:07:59.322 real 0m0.391s 00:07:59.322 user 0m0.184s 00:07:59.322 sys 0m0.087s 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:59.322 ************************************ 00:07:59.322 START TEST dd_flag_directory_forced_aio 00:07:59.322 ************************************ 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.322 16:03:05 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.322 16:03:05 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.322 [2024-11-19 16:03:05.912680] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:59.322 [2024-11-19 16:03:05.912773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73506 ] 00:07:59.581 [2024-11-19 16:03:06.060001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.581 [2024-11-19 16:03:06.081205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.581 [2024-11-19 16:03:06.109369] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.581 [2024-11-19 16:03:06.123245] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.581 [2024-11-19 16:03:06.123609] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.581 [2024-11-19 16:03:06.123636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.581 [2024-11-19 16:03:06.181385] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.581 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.582 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.582 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.582 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:59.582 [2024-11-19 16:03:06.288812] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:07:59.582 [2024-11-19 16:03:06.288911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73510 ] 00:07:59.841 [2024-11-19 16:03:06.426161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.841 [2024-11-19 16:03:06.443757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.841 [2024-11-19 16:03:06.469389] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.841 [2024-11-19 16:03:06.483160] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.841 [2024-11-19 16:03:06.483524] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:59.841 [2024-11-19 16:03:06.483549] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.841 [2024-11-19 16:03:06.537499] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:00.101 16:03:06 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.101 00:08:00.101 real 0m0.737s 00:08:00.101 user 0m0.354s 00:08:00.101 sys 0m0.176s 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.101 ************************************ 00:08:00.101 END TEST dd_flag_directory_forced_aio 00:08:00.101 ************************************ 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:00.101 ************************************ 00:08:00.101 START TEST dd_flag_nofollow_forced_aio 00:08:00.101 ************************************ 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.101 16:03:06 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.101 [2024-11-19 16:03:06.700549] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:00.101 [2024-11-19 16:03:06.700652] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73545 ] 00:08:00.361 [2024-11-19 16:03:06.837418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.361 [2024-11-19 16:03:06.855965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.361 [2024-11-19 16:03:06.882206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.361 [2024-11-19 16:03:06.896248] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:00.362 [2024-11-19 16:03:06.896341] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:00.362 [2024-11-19 16:03:06.896360] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.362 [2024-11-19 16:03:06.954706] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:00.362 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:00.362 [2024-11-19 16:03:07.069739] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:00.362 [2024-11-19 16:03:07.069836] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73549 ] 00:08:00.621 [2024-11-19 16:03:07.214511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.621 [2024-11-19 16:03:07.232273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.621 [2024-11-19 16:03:07.258050] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.621 [2024-11-19 16:03:07.272228] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:00.621 [2024-11-19 16:03:07.272306] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:00.621 [2024-11-19 16:03:07.272342] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.621 [2024-11-19 16:03:07.326337] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:00.881 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.881 [2024-11-19 16:03:07.437230] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:00.881 [2024-11-19 16:03:07.437338] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73551 ] 00:08:00.881 [2024-11-19 16:03:07.582340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.141 [2024-11-19 16:03:07.601869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.141 [2024-11-19 16:03:07.631105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.141  [2024-11-19T16:03:07.856Z] Copying: 512/512 [B] (average 500 kBps) 00:08:01.141 00:08:01.141 ************************************ 00:08:01.141 END TEST dd_flag_nofollow_forced_aio 00:08:01.141 ************************************ 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 6y20h7cxaqzcmgm9y9hegzicb3xys264rw11y9p1jv78h2tx0kl3prg0plgo2fpoegegkg73prjth9qwa03oo97nqbsflbyvfxrftron0ccgqddfqdl7xf8v5runyvw46jksk7mtd8uvebw13u0hanbxl5yafoyv71liwvgjlkx17fcgu4l0y2qqphe2yq9e4591dasd0yzhmvidpsnzf8bmh6o7ug32kubz7dq2fmkmm91cutgv3mr777jikfskcy2megrjz84iwskb7bwjmko4ex24umthase0qh2aii41gposro6tigubjeja5eim8wvokfw69z1cgynpe8kiucirr3ri5zvp0fx6o69y6ifag4ewc25bpy2y59qlk9rgmng8zwm05magn696rbq3bpkv5200ll31q2qbmykwu5pittu2n98rojit23934eo9pgqykfi4cbca1xr8wmkmoobzi0ya6sx1fyy2lpkxogzut2itigsgms7epgpto3xp == \6\y\2\0\h\7\c\x\a\q\z\c\m\g\m\9\y\9\h\e\g\z\i\c\b\3\x\y\s\2\6\4\r\w\1\1\y\9\p\1\j\v\7\8\h\2\t\x\0\k\l\3\p\r\g\0\p\l\g\o\2\f\p\o\e\g\e\g\k\g\7\3\p\r\j\t\h\9\q\w\a\0\3\o\o\9\7\n\q\b\s\f\l\b\y\v\f\x\r\f\t\r\o\n\0\c\c\g\q\d\d\f\q\d\l\7\x\f\8\v\5\r\u\n\y\v\w\4\6\j\k\s\k\7\m\t\d\8\u\v\e\b\w\1\3\u\0\h\a\n\b\x\l\5\y\a\f\o\y\v\7\1\l\i\w\v\g\j\l\k\x\1\7\f\c\g\u\4\l\0\y\2\q\q\p\h\e\2\y\q\9\e\4\5\9\1\d\a\s\d\0\y\z\h\m\v\i\d\p\s\n\z\f\8\b\m\h\6\o\7\u\g\3\2\k\u\b\z\7\d\q\2\f\m\k\m\m\9\1\c\u\t\g\v\3\m\r\7\7\7\j\i\k\f\s\k\c\y\2\m\e\g\r\j\z\8\4\i\w\s\k\b\7\b\w\j\m\k\o\4\e\x\2\4\u\m\t\h\a\s\e\0\q\h\2\a\i\i\4\1\g\p\o\s\r\o\6\t\i\g\u\b\j\e\j\a\5\e\i\m\8\w\v\o\k\f\w\6\9\z\1\c\g\y\n\p\e\8\k\i\u\c\i\r\r\3\r\i\5\z\v\p\0\f\x\6\o\6\9\y\6\i\f\a\g\4\e\w\c\2\5\b\p\y\2\y\5\9\q\l\k\9\r\g\m\n\g\8\z\w\m\0\5\m\a\g\n\6\9\6\r\b\q\3\b\p\k\v\5\2\0\0\l\l\3\1\q\2\q\b\m\y\k\w\u\5\p\i\t\t\u\2\n\9\8\r\o\j\i\t\2\3\9\3\4\e\o\9\p\g\q\y\k\f\i\4\c\b\c\a\1\x\r\8\w\m\k\m\o\o\b\z\i\0\y\a\6\s\x\1\f\y\y\2\l\p\k\x\o\g\z\u\t\2\i\t\i\g\s\g\m\s\7\e\p\g\p\t\o\3\x\p ]] 00:08:01.141 00:08:01.141 real 0m1.126s 00:08:01.141 user 0m0.541s 00:08:01.141 sys 0m0.259s 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:01.141 ************************************ 00:08:01.141 START TEST dd_flag_noatime_forced_aio 00:08:01.141 ************************************ 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732032187 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732032187 00:08:01.141 16:03:07 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:02.519 16:03:08 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.519 [2024-11-19 16:03:08.904610] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:02.519 [2024-11-19 16:03:08.904700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73597 ] 00:08:02.519 [2024-11-19 16:03:09.058070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.519 [2024-11-19 16:03:09.082961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.519 [2024-11-19 16:03:09.118036] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.519  [2024-11-19T16:03:09.492Z] Copying: 512/512 [B] (average 500 kBps) 00:08:02.777 00:08:02.777 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.777 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732032187 )) 00:08:02.777 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.778 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732032187 )) 00:08:02.778 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.778 [2024-11-19 16:03:09.341021] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:02.778 [2024-11-19 16:03:09.341283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73603 ] 00:08:02.778 [2024-11-19 16:03:09.487597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.036 [2024-11-19 16:03:09.505824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.036 [2024-11-19 16:03:09.531897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.036  [2024-11-19T16:03:09.751Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.036 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.036 ************************************ 00:08:03.036 END TEST dd_flag_noatime_forced_aio 00:08:03.036 ************************************ 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732032189 )) 00:08:03.036 00:08:03.036 real 0m1.848s 00:08:03.036 user 0m0.400s 00:08:03.036 sys 0m0.204s 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.036 ************************************ 00:08:03.036 START TEST dd_flags_misc_forced_aio 00:08:03.036 ************************************ 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.036 16:03:09 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:03.296 [2024-11-19 16:03:09.786045] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:03.296 [2024-11-19 16:03:09.786299] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73630 ] 00:08:03.296 [2024-11-19 16:03:09.931331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.296 [2024-11-19 16:03:09.949186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.296 [2024-11-19 16:03:09.975252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.296  [2024-11-19T16:03:10.270Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.555 00:08:03.555 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ weudp7cekb9ai1qhcva3s2qucov5fr4sfbwxdzno7m0alagv1ky25q8socpalmbv48p2jym6zo4jvukmwosw3m7wx84gjc3ggsllsfk79nwgy293719jmlo6f0ob3ptd6ow3lc3ia2nu4mteudxsr3a9ld0ukyyhrdegeuv39unsgtm33237tp773asx426hf30wwrvicdcdt9pprl4jw4y52t2h1lfu9wyhdaa39v640l5cq7ivo84osa9przcoho8mb2f5cwoffrr83crd257igsom7g5rl69hnzotril8qnvjsyd266bo8b7fq87pfplcknrhs7x325w70b4a4wvri8dei8jyxvnqlfc91pg7ujbrowfefd12yaokzx0lj8mmf5w79vb8i072bpwyri6pmr5pfjic4r99gg1nzef1kz5eag15bl6q1wk2lslgizi4c4jldamvac4slknxiw4mhmacpqsp5j8w2u5alrj9yk2ei6kd10nxi02vsvsm == 
\w\e\u\d\p\7\c\e\k\b\9\a\i\1\q\h\c\v\a\3\s\2\q\u\c\o\v\5\f\r\4\s\f\b\w\x\d\z\n\o\7\m\0\a\l\a\g\v\1\k\y\2\5\q\8\s\o\c\p\a\l\m\b\v\4\8\p\2\j\y\m\6\z\o\4\j\v\u\k\m\w\o\s\w\3\m\7\w\x\8\4\g\j\c\3\g\g\s\l\l\s\f\k\7\9\n\w\g\y\2\9\3\7\1\9\j\m\l\o\6\f\0\o\b\3\p\t\d\6\o\w\3\l\c\3\i\a\2\n\u\4\m\t\e\u\d\x\s\r\3\a\9\l\d\0\u\k\y\y\h\r\d\e\g\e\u\v\3\9\u\n\s\g\t\m\3\3\2\3\7\t\p\7\7\3\a\s\x\4\2\6\h\f\3\0\w\w\r\v\i\c\d\c\d\t\9\p\p\r\l\4\j\w\4\y\5\2\t\2\h\1\l\f\u\9\w\y\h\d\a\a\3\9\v\6\4\0\l\5\c\q\7\i\v\o\8\4\o\s\a\9\p\r\z\c\o\h\o\8\m\b\2\f\5\c\w\o\f\f\r\r\8\3\c\r\d\2\5\7\i\g\s\o\m\7\g\5\r\l\6\9\h\n\z\o\t\r\i\l\8\q\n\v\j\s\y\d\2\6\6\b\o\8\b\7\f\q\8\7\p\f\p\l\c\k\n\r\h\s\7\x\3\2\5\w\7\0\b\4\a\4\w\v\r\i\8\d\e\i\8\j\y\x\v\n\q\l\f\c\9\1\p\g\7\u\j\b\r\o\w\f\e\f\d\1\2\y\a\o\k\z\x\0\l\j\8\m\m\f\5\w\7\9\v\b\8\i\0\7\2\b\p\w\y\r\i\6\p\m\r\5\p\f\j\i\c\4\r\9\9\g\g\1\n\z\e\f\1\k\z\5\e\a\g\1\5\b\l\6\q\1\w\k\2\l\s\l\g\i\z\i\4\c\4\j\l\d\a\m\v\a\c\4\s\l\k\n\x\i\w\4\m\h\m\a\c\p\q\s\p\5\j\8\w\2\u\5\a\l\r\j\9\y\k\2\e\i\6\k\d\1\0\n\x\i\0\2\v\s\v\s\m ]] 00:08:03.555 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.555 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:03.555 [2024-11-19 16:03:10.171987] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:03.555 [2024-11-19 16:03:10.172218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73637 ] 00:08:03.814 [2024-11-19 16:03:10.316270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.814 [2024-11-19 16:03:10.333889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.814 [2024-11-19 16:03:10.359813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.814  [2024-11-19T16:03:10.529Z] Copying: 512/512 [B] (average 500 kBps) 00:08:03.814 00:08:03.814 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ weudp7cekb9ai1qhcva3s2qucov5fr4sfbwxdzno7m0alagv1ky25q8socpalmbv48p2jym6zo4jvukmwosw3m7wx84gjc3ggsllsfk79nwgy293719jmlo6f0ob3ptd6ow3lc3ia2nu4mteudxsr3a9ld0ukyyhrdegeuv39unsgtm33237tp773asx426hf30wwrvicdcdt9pprl4jw4y52t2h1lfu9wyhdaa39v640l5cq7ivo84osa9przcoho8mb2f5cwoffrr83crd257igsom7g5rl69hnzotril8qnvjsyd266bo8b7fq87pfplcknrhs7x325w70b4a4wvri8dei8jyxvnqlfc91pg7ujbrowfefd12yaokzx0lj8mmf5w79vb8i072bpwyri6pmr5pfjic4r99gg1nzef1kz5eag15bl6q1wk2lslgizi4c4jldamvac4slknxiw4mhmacpqsp5j8w2u5alrj9yk2ei6kd10nxi02vsvsm == 
\w\e\u\d\p\7\c\e\k\b\9\a\i\1\q\h\c\v\a\3\s\2\q\u\c\o\v\5\f\r\4\s\f\b\w\x\d\z\n\o\7\m\0\a\l\a\g\v\1\k\y\2\5\q\8\s\o\c\p\a\l\m\b\v\4\8\p\2\j\y\m\6\z\o\4\j\v\u\k\m\w\o\s\w\3\m\7\w\x\8\4\g\j\c\3\g\g\s\l\l\s\f\k\7\9\n\w\g\y\2\9\3\7\1\9\j\m\l\o\6\f\0\o\b\3\p\t\d\6\o\w\3\l\c\3\i\a\2\n\u\4\m\t\e\u\d\x\s\r\3\a\9\l\d\0\u\k\y\y\h\r\d\e\g\e\u\v\3\9\u\n\s\g\t\m\3\3\2\3\7\t\p\7\7\3\a\s\x\4\2\6\h\f\3\0\w\w\r\v\i\c\d\c\d\t\9\p\p\r\l\4\j\w\4\y\5\2\t\2\h\1\l\f\u\9\w\y\h\d\a\a\3\9\v\6\4\0\l\5\c\q\7\i\v\o\8\4\o\s\a\9\p\r\z\c\o\h\o\8\m\b\2\f\5\c\w\o\f\f\r\r\8\3\c\r\d\2\5\7\i\g\s\o\m\7\g\5\r\l\6\9\h\n\z\o\t\r\i\l\8\q\n\v\j\s\y\d\2\6\6\b\o\8\b\7\f\q\8\7\p\f\p\l\c\k\n\r\h\s\7\x\3\2\5\w\7\0\b\4\a\4\w\v\r\i\8\d\e\i\8\j\y\x\v\n\q\l\f\c\9\1\p\g\7\u\j\b\r\o\w\f\e\f\d\1\2\y\a\o\k\z\x\0\l\j\8\m\m\f\5\w\7\9\v\b\8\i\0\7\2\b\p\w\y\r\i\6\p\m\r\5\p\f\j\i\c\4\r\9\9\g\g\1\n\z\e\f\1\k\z\5\e\a\g\1\5\b\l\6\q\1\w\k\2\l\s\l\g\i\z\i\4\c\4\j\l\d\a\m\v\a\c\4\s\l\k\n\x\i\w\4\m\h\m\a\c\p\q\s\p\5\j\8\w\2\u\5\a\l\r\j\9\y\k\2\e\i\6\k\d\1\0\n\x\i\0\2\v\s\v\s\m ]] 00:08:03.814 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.814 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.073 [2024-11-19 16:03:10.549175] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:04.073 [2024-11-19 16:03:10.549287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73639 ] 00:08:04.073 [2024-11-19 16:03:10.699543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.073 [2024-11-19 16:03:10.721805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.073 [2024-11-19 16:03:10.748288] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.073  [2024-11-19T16:03:11.047Z] Copying: 512/512 [B] (average 166 kBps) 00:08:04.332 00:08:04.333 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ weudp7cekb9ai1qhcva3s2qucov5fr4sfbwxdzno7m0alagv1ky25q8socpalmbv48p2jym6zo4jvukmwosw3m7wx84gjc3ggsllsfk79nwgy293719jmlo6f0ob3ptd6ow3lc3ia2nu4mteudxsr3a9ld0ukyyhrdegeuv39unsgtm33237tp773asx426hf30wwrvicdcdt9pprl4jw4y52t2h1lfu9wyhdaa39v640l5cq7ivo84osa9przcoho8mb2f5cwoffrr83crd257igsom7g5rl69hnzotril8qnvjsyd266bo8b7fq87pfplcknrhs7x325w70b4a4wvri8dei8jyxvnqlfc91pg7ujbrowfefd12yaokzx0lj8mmf5w79vb8i072bpwyri6pmr5pfjic4r99gg1nzef1kz5eag15bl6q1wk2lslgizi4c4jldamvac4slknxiw4mhmacpqsp5j8w2u5alrj9yk2ei6kd10nxi02vsvsm == 
\w\e\u\d\p\7\c\e\k\b\9\a\i\1\q\h\c\v\a\3\s\2\q\u\c\o\v\5\f\r\4\s\f\b\w\x\d\z\n\o\7\m\0\a\l\a\g\v\1\k\y\2\5\q\8\s\o\c\p\a\l\m\b\v\4\8\p\2\j\y\m\6\z\o\4\j\v\u\k\m\w\o\s\w\3\m\7\w\x\8\4\g\j\c\3\g\g\s\l\l\s\f\k\7\9\n\w\g\y\2\9\3\7\1\9\j\m\l\o\6\f\0\o\b\3\p\t\d\6\o\w\3\l\c\3\i\a\2\n\u\4\m\t\e\u\d\x\s\r\3\a\9\l\d\0\u\k\y\y\h\r\d\e\g\e\u\v\3\9\u\n\s\g\t\m\3\3\2\3\7\t\p\7\7\3\a\s\x\4\2\6\h\f\3\0\w\w\r\v\i\c\d\c\d\t\9\p\p\r\l\4\j\w\4\y\5\2\t\2\h\1\l\f\u\9\w\y\h\d\a\a\3\9\v\6\4\0\l\5\c\q\7\i\v\o\8\4\o\s\a\9\p\r\z\c\o\h\o\8\m\b\2\f\5\c\w\o\f\f\r\r\8\3\c\r\d\2\5\7\i\g\s\o\m\7\g\5\r\l\6\9\h\n\z\o\t\r\i\l\8\q\n\v\j\s\y\d\2\6\6\b\o\8\b\7\f\q\8\7\p\f\p\l\c\k\n\r\h\s\7\x\3\2\5\w\7\0\b\4\a\4\w\v\r\i\8\d\e\i\8\j\y\x\v\n\q\l\f\c\9\1\p\g\7\u\j\b\r\o\w\f\e\f\d\1\2\y\a\o\k\z\x\0\l\j\8\m\m\f\5\w\7\9\v\b\8\i\0\7\2\b\p\w\y\r\i\6\p\m\r\5\p\f\j\i\c\4\r\9\9\g\g\1\n\z\e\f\1\k\z\5\e\a\g\1\5\b\l\6\q\1\w\k\2\l\s\l\g\i\z\i\4\c\4\j\l\d\a\m\v\a\c\4\s\l\k\n\x\i\w\4\m\h\m\a\c\p\q\s\p\5\j\8\w\2\u\5\a\l\r\j\9\y\k\2\e\i\6\k\d\1\0\n\x\i\0\2\v\s\v\s\m ]] 00:08:04.333 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.333 16:03:10 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:04.333 [2024-11-19 16:03:10.926089] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:04.333 [2024-11-19 16:03:10.926186] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73652 ] 00:08:04.591 [2024-11-19 16:03:11.060634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.591 [2024-11-19 16:03:11.078747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.591 [2024-11-19 16:03:11.104585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.591  [2024-11-19T16:03:11.307Z] Copying: 512/512 [B] (average 500 kBps) 00:08:04.592 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ weudp7cekb9ai1qhcva3s2qucov5fr4sfbwxdzno7m0alagv1ky25q8socpalmbv48p2jym6zo4jvukmwosw3m7wx84gjc3ggsllsfk79nwgy293719jmlo6f0ob3ptd6ow3lc3ia2nu4mteudxsr3a9ld0ukyyhrdegeuv39unsgtm33237tp773asx426hf30wwrvicdcdt9pprl4jw4y52t2h1lfu9wyhdaa39v640l5cq7ivo84osa9przcoho8mb2f5cwoffrr83crd257igsom7g5rl69hnzotril8qnvjsyd266bo8b7fq87pfplcknrhs7x325w70b4a4wvri8dei8jyxvnqlfc91pg7ujbrowfefd12yaokzx0lj8mmf5w79vb8i072bpwyri6pmr5pfjic4r99gg1nzef1kz5eag15bl6q1wk2lslgizi4c4jldamvac4slknxiw4mhmacpqsp5j8w2u5alrj9yk2ei6kd10nxi02vsvsm == 
\w\e\u\d\p\7\c\e\k\b\9\a\i\1\q\h\c\v\a\3\s\2\q\u\c\o\v\5\f\r\4\s\f\b\w\x\d\z\n\o\7\m\0\a\l\a\g\v\1\k\y\2\5\q\8\s\o\c\p\a\l\m\b\v\4\8\p\2\j\y\m\6\z\o\4\j\v\u\k\m\w\o\s\w\3\m\7\w\x\8\4\g\j\c\3\g\g\s\l\l\s\f\k\7\9\n\w\g\y\2\9\3\7\1\9\j\m\l\o\6\f\0\o\b\3\p\t\d\6\o\w\3\l\c\3\i\a\2\n\u\4\m\t\e\u\d\x\s\r\3\a\9\l\d\0\u\k\y\y\h\r\d\e\g\e\u\v\3\9\u\n\s\g\t\m\3\3\2\3\7\t\p\7\7\3\a\s\x\4\2\6\h\f\3\0\w\w\r\v\i\c\d\c\d\t\9\p\p\r\l\4\j\w\4\y\5\2\t\2\h\1\l\f\u\9\w\y\h\d\a\a\3\9\v\6\4\0\l\5\c\q\7\i\v\o\8\4\o\s\a\9\p\r\z\c\o\h\o\8\m\b\2\f\5\c\w\o\f\f\r\r\8\3\c\r\d\2\5\7\i\g\s\o\m\7\g\5\r\l\6\9\h\n\z\o\t\r\i\l\8\q\n\v\j\s\y\d\2\6\6\b\o\8\b\7\f\q\8\7\p\f\p\l\c\k\n\r\h\s\7\x\3\2\5\w\7\0\b\4\a\4\w\v\r\i\8\d\e\i\8\j\y\x\v\n\q\l\f\c\9\1\p\g\7\u\j\b\r\o\w\f\e\f\d\1\2\y\a\o\k\z\x\0\l\j\8\m\m\f\5\w\7\9\v\b\8\i\0\7\2\b\p\w\y\r\i\6\p\m\r\5\p\f\j\i\c\4\r\9\9\g\g\1\n\z\e\f\1\k\z\5\e\a\g\1\5\b\l\6\q\1\w\k\2\l\s\l\g\i\z\i\4\c\4\j\l\d\a\m\v\a\c\4\s\l\k\n\x\i\w\4\m\h\m\a\c\p\q\s\p\5\j\8\w\2\u\5\a\l\r\j\9\y\k\2\e\i\6\k\d\1\0\n\x\i\0\2\v\s\v\s\m ]] 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.592 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:04.851 [2024-11-19 16:03:11.309645] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:04.851 [2024-11-19 16:03:11.309937] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73654 ] 00:08:04.851 [2024-11-19 16:03:11.455388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.851 [2024-11-19 16:03:11.473242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.851 [2024-11-19 16:03:11.499545] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.851  [2024-11-19T16:03:11.825Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.110 00:08:05.110 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hdeuc4jdkpfzfrv8pe15crh3xmm6nvps7mth3o7ye9grhhy7o1gln9up9msaviune9xny9z677r7576uatqzclip63i1s9potjmysd7vr10897inpainjlcwv7xz1kszg3ig2m0ls2d90918aon02r7bvkxb1nmpo033xkwxcze0o0z5j79f4vcv1310hgzx4ztmhv730f657p4ihf6chik4b5ien86rrteercufzig8lmm4ti4w1ou8e6h67lhn1eob4bdhiknuj6cdwci2cpjd386ee1f0mavilpf6h5n0lt5oqlt7yggteo59zt83j6ukxlo71i1ehx9ok6jm8shh4hmjj43udjjmcxz07eaf0jaokvj0pmnanzmxqi7yvx2th8ewwogynarxhno1eex88b28ufw8wqcg4icmyx328qvvb8obxzpoypckcf9t0ev8v60fzoycnew4ztrlvk70cv857vtj4fhfcst5c5lkfnzsymjowoeu8fswmdiu == \h\d\e\u\c\4\j\d\k\p\f\z\f\r\v\8\p\e\1\5\c\r\h\3\x\m\m\6\n\v\p\s\7\m\t\h\3\o\7\y\e\9\g\r\h\h\y\7\o\1\g\l\n\9\u\p\9\m\s\a\v\i\u\n\e\9\x\n\y\9\z\6\7\7\r\7\5\7\6\u\a\t\q\z\c\l\i\p\6\3\i\1\s\9\p\o\t\j\m\y\s\d\7\v\r\1\0\8\9\7\i\n\p\a\i\n\j\l\c\w\v\7\x\z\1\k\s\z\g\3\i\g\2\m\0\l\s\2\d\9\0\9\1\8\a\o\n\0\2\r\7\b\v\k\x\b\1\n\m\p\o\0\3\3\x\k\w\x\c\z\e\0\o\0\z\5\j\7\9\f\4\v\c\v\1\3\1\0\h\g\z\x\4\z\t\m\h\v\7\3\0\f\6\5\7\p\4\i\h\f\6\c\h\i\k\4\b\5\i\e\n\8\6\r\r\t\e\e\r\c\u\f\z\i\g\8\l\m\m\4\t\i\4\w\1\o\u\8\e\6\h\6\7\l\h\n\1\e\o\b\4\b\d\h\i\k\n\u\j\6\c\d\w\c\i\2\c\p\j\d\3\8\6\e\e\1\f\0\m\a\v\i\l\p\f\6\h\5\n\0\l\t\5\o\q\l\t\7\y\g\g\t\e\o\5\9\z\t\8\3\j\6\u\k\x\l\o\7\1\i\1\e\h\x\9\o\k\6\j\m\8\s\h\h\4\h\m\j\j\4\3\u\d\j\j\m\c\x\z\0\7\e\a\f\0\j\a\o\k\v\j\0\p\m\n\a\n\z\m\x\q\i\7\y\v\x\2\t\h\8\e\w\w\o\g\y\n\a\r\x\h\n\o\1\e\e\x\8\8\b\2\8\u\f\w\8\w\q\c\g\4\i\c\m\y\x\3\2\8\q\v\v\b\8\o\b\x\z\p\o\y\p\c\k\c\f\9\t\0\e\v\8\v\6\0\f\z\o\y\c\n\e\w\4\z\t\r\l\v\k\7\0\c\v\8\5\7\v\t\j\4\f\h\f\c\s\t\5\c\5\l\k\f\n\z\s\y\m\j\o\w\o\e\u\8\f\s\w\m\d\i\u ]] 00:08:05.110 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.110 16:03:11 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:05.110 [2024-11-19 16:03:11.691901] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:05.110 [2024-11-19 16:03:11.691994] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73656 ] 00:08:05.370 [2024-11-19 16:03:11.835352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.370 [2024-11-19 16:03:11.854826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.370 [2024-11-19 16:03:11.884601] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.370  [2024-11-19T16:03:12.085Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.370 00:08:05.370 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hdeuc4jdkpfzfrv8pe15crh3xmm6nvps7mth3o7ye9grhhy7o1gln9up9msaviune9xny9z677r7576uatqzclip63i1s9potjmysd7vr10897inpainjlcwv7xz1kszg3ig2m0ls2d90918aon02r7bvkxb1nmpo033xkwxcze0o0z5j79f4vcv1310hgzx4ztmhv730f657p4ihf6chik4b5ien86rrteercufzig8lmm4ti4w1ou8e6h67lhn1eob4bdhiknuj6cdwci2cpjd386ee1f0mavilpf6h5n0lt5oqlt7yggteo59zt83j6ukxlo71i1ehx9ok6jm8shh4hmjj43udjjmcxz07eaf0jaokvj0pmnanzmxqi7yvx2th8ewwogynarxhno1eex88b28ufw8wqcg4icmyx328qvvb8obxzpoypckcf9t0ev8v60fzoycnew4ztrlvk70cv857vtj4fhfcst5c5lkfnzsymjowoeu8fswmdiu == \h\d\e\u\c\4\j\d\k\p\f\z\f\r\v\8\p\e\1\5\c\r\h\3\x\m\m\6\n\v\p\s\7\m\t\h\3\o\7\y\e\9\g\r\h\h\y\7\o\1\g\l\n\9\u\p\9\m\s\a\v\i\u\n\e\9\x\n\y\9\z\6\7\7\r\7\5\7\6\u\a\t\q\z\c\l\i\p\6\3\i\1\s\9\p\o\t\j\m\y\s\d\7\v\r\1\0\8\9\7\i\n\p\a\i\n\j\l\c\w\v\7\x\z\1\k\s\z\g\3\i\g\2\m\0\l\s\2\d\9\0\9\1\8\a\o\n\0\2\r\7\b\v\k\x\b\1\n\m\p\o\0\3\3\x\k\w\x\c\z\e\0\o\0\z\5\j\7\9\f\4\v\c\v\1\3\1\0\h\g\z\x\4\z\t\m\h\v\7\3\0\f\6\5\7\p\4\i\h\f\6\c\h\i\k\4\b\5\i\e\n\8\6\r\r\t\e\e\r\c\u\f\z\i\g\8\l\m\m\4\t\i\4\w\1\o\u\8\e\6\h\6\7\l\h\n\1\e\o\b\4\b\d\h\i\k\n\u\j\6\c\d\w\c\i\2\c\p\j\d\3\8\6\e\e\1\f\0\m\a\v\i\l\p\f\6\h\5\n\0\l\t\5\o\q\l\t\7\y\g\g\t\e\o\5\9\z\t\8\3\j\6\u\k\x\l\o\7\1\i\1\e\h\x\9\o\k\6\j\m\8\s\h\h\4\h\m\j\j\4\3\u\d\j\j\m\c\x\z\0\7\e\a\f\0\j\a\o\k\v\j\0\p\m\n\a\n\z\m\x\q\i\7\y\v\x\2\t\h\8\e\w\w\o\g\y\n\a\r\x\h\n\o\1\e\e\x\8\8\b\2\8\u\f\w\8\w\q\c\g\4\i\c\m\y\x\3\2\8\q\v\v\b\8\o\b\x\z\p\o\y\p\c\k\c\f\9\t\0\e\v\8\v\6\0\f\z\o\y\c\n\e\w\4\z\t\r\l\v\k\7\0\c\v\8\5\7\v\t\j\4\f\h\f\c\s\t\5\c\5\l\k\f\n\z\s\y\m\j\o\w\o\e\u\8\f\s\w\m\d\i\u ]] 00:08:05.370 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.370 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:05.370 [2024-11-19 16:03:12.080570] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:05.370 [2024-11-19 16:03:12.080843] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73669 ] 00:08:05.629 [2024-11-19 16:03:12.225111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.629 [2024-11-19 16:03:12.242908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.629 [2024-11-19 16:03:12.269328] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.629  [2024-11-19T16:03:12.603Z] Copying: 512/512 [B] (average 500 kBps) 00:08:05.888 00:08:05.888 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hdeuc4jdkpfzfrv8pe15crh3xmm6nvps7mth3o7ye9grhhy7o1gln9up9msaviune9xny9z677r7576uatqzclip63i1s9potjmysd7vr10897inpainjlcwv7xz1kszg3ig2m0ls2d90918aon02r7bvkxb1nmpo033xkwxcze0o0z5j79f4vcv1310hgzx4ztmhv730f657p4ihf6chik4b5ien86rrteercufzig8lmm4ti4w1ou8e6h67lhn1eob4bdhiknuj6cdwci2cpjd386ee1f0mavilpf6h5n0lt5oqlt7yggteo59zt83j6ukxlo71i1ehx9ok6jm8shh4hmjj43udjjmcxz07eaf0jaokvj0pmnanzmxqi7yvx2th8ewwogynarxhno1eex88b28ufw8wqcg4icmyx328qvvb8obxzpoypckcf9t0ev8v60fzoycnew4ztrlvk70cv857vtj4fhfcst5c5lkfnzsymjowoeu8fswmdiu == \h\d\e\u\c\4\j\d\k\p\f\z\f\r\v\8\p\e\1\5\c\r\h\3\x\m\m\6\n\v\p\s\7\m\t\h\3\o\7\y\e\9\g\r\h\h\y\7\o\1\g\l\n\9\u\p\9\m\s\a\v\i\u\n\e\9\x\n\y\9\z\6\7\7\r\7\5\7\6\u\a\t\q\z\c\l\i\p\6\3\i\1\s\9\p\o\t\j\m\y\s\d\7\v\r\1\0\8\9\7\i\n\p\a\i\n\j\l\c\w\v\7\x\z\1\k\s\z\g\3\i\g\2\m\0\l\s\2\d\9\0\9\1\8\a\o\n\0\2\r\7\b\v\k\x\b\1\n\m\p\o\0\3\3\x\k\w\x\c\z\e\0\o\0\z\5\j\7\9\f\4\v\c\v\1\3\1\0\h\g\z\x\4\z\t\m\h\v\7\3\0\f\6\5\7\p\4\i\h\f\6\c\h\i\k\4\b\5\i\e\n\8\6\r\r\t\e\e\r\c\u\f\z\i\g\8\l\m\m\4\t\i\4\w\1\o\u\8\e\6\h\6\7\l\h\n\1\e\o\b\4\b\d\h\i\k\n\u\j\6\c\d\w\c\i\2\c\p\j\d\3\8\6\e\e\1\f\0\m\a\v\i\l\p\f\6\h\5\n\0\l\t\5\o\q\l\t\7\y\g\g\t\e\o\5\9\z\t\8\3\j\6\u\k\x\l\o\7\1\i\1\e\h\x\9\o\k\6\j\m\8\s\h\h\4\h\m\j\j\4\3\u\d\j\j\m\c\x\z\0\7\e\a\f\0\j\a\o\k\v\j\0\p\m\n\a\n\z\m\x\q\i\7\y\v\x\2\t\h\8\e\w\w\o\g\y\n\a\r\x\h\n\o\1\e\e\x\8\8\b\2\8\u\f\w\8\w\q\c\g\4\i\c\m\y\x\3\2\8\q\v\v\b\8\o\b\x\z\p\o\y\p\c\k\c\f\9\t\0\e\v\8\v\6\0\f\z\o\y\c\n\e\w\4\z\t\r\l\v\k\7\0\c\v\8\5\7\v\t\j\4\f\h\f\c\s\t\5\c\5\l\k\f\n\z\s\y\m\j\o\w\o\e\u\8\f\s\w\m\d\i\u ]] 00:08:05.888 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.888 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:05.888 [2024-11-19 16:03:12.468185] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:05.888 [2024-11-19 16:03:12.468294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73671 ] 00:08:06.148 [2024-11-19 16:03:12.612821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.148 [2024-11-19 16:03:12.630476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.148 [2024-11-19 16:03:12.656229] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.148  [2024-11-19T16:03:12.863Z] Copying: 512/512 [B] (average 500 kBps) 00:08:06.148 00:08:06.148 ************************************ 00:08:06.148 END TEST dd_flags_misc_forced_aio 00:08:06.148 ************************************ 00:08:06.148 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ hdeuc4jdkpfzfrv8pe15crh3xmm6nvps7mth3o7ye9grhhy7o1gln9up9msaviune9xny9z677r7576uatqzclip63i1s9potjmysd7vr10897inpainjlcwv7xz1kszg3ig2m0ls2d90918aon02r7bvkxb1nmpo033xkwxcze0o0z5j79f4vcv1310hgzx4ztmhv730f657p4ihf6chik4b5ien86rrteercufzig8lmm4ti4w1ou8e6h67lhn1eob4bdhiknuj6cdwci2cpjd386ee1f0mavilpf6h5n0lt5oqlt7yggteo59zt83j6ukxlo71i1ehx9ok6jm8shh4hmjj43udjjmcxz07eaf0jaokvj0pmnanzmxqi7yvx2th8ewwogynarxhno1eex88b28ufw8wqcg4icmyx328qvvb8obxzpoypckcf9t0ev8v60fzoycnew4ztrlvk70cv857vtj4fhfcst5c5lkfnzsymjowoeu8fswmdiu == \h\d\e\u\c\4\j\d\k\p\f\z\f\r\v\8\p\e\1\5\c\r\h\3\x\m\m\6\n\v\p\s\7\m\t\h\3\o\7\y\e\9\g\r\h\h\y\7\o\1\g\l\n\9\u\p\9\m\s\a\v\i\u\n\e\9\x\n\y\9\z\6\7\7\r\7\5\7\6\u\a\t\q\z\c\l\i\p\6\3\i\1\s\9\p\o\t\j\m\y\s\d\7\v\r\1\0\8\9\7\i\n\p\a\i\n\j\l\c\w\v\7\x\z\1\k\s\z\g\3\i\g\2\m\0\l\s\2\d\9\0\9\1\8\a\o\n\0\2\r\7\b\v\k\x\b\1\n\m\p\o\0\3\3\x\k\w\x\c\z\e\0\o\0\z\5\j\7\9\f\4\v\c\v\1\3\1\0\h\g\z\x\4\z\t\m\h\v\7\3\0\f\6\5\7\p\4\i\h\f\6\c\h\i\k\4\b\5\i\e\n\8\6\r\r\t\e\e\r\c\u\f\z\i\g\8\l\m\m\4\t\i\4\w\1\o\u\8\e\6\h\6\7\l\h\n\1\e\o\b\4\b\d\h\i\k\n\u\j\6\c\d\w\c\i\2\c\p\j\d\3\8\6\e\e\1\f\0\m\a\v\i\l\p\f\6\h\5\n\0\l\t\5\o\q\l\t\7\y\g\g\t\e\o\5\9\z\t\8\3\j\6\u\k\x\l\o\7\1\i\1\e\h\x\9\o\k\6\j\m\8\s\h\h\4\h\m\j\j\4\3\u\d\j\j\m\c\x\z\0\7\e\a\f\0\j\a\o\k\v\j\0\p\m\n\a\n\z\m\x\q\i\7\y\v\x\2\t\h\8\e\w\w\o\g\y\n\a\r\x\h\n\o\1\e\e\x\8\8\b\2\8\u\f\w\8\w\q\c\g\4\i\c\m\y\x\3\2\8\q\v\v\b\8\o\b\x\z\p\o\y\p\c\k\c\f\9\t\0\e\v\8\v\6\0\f\z\o\y\c\n\e\w\4\z\t\r\l\v\k\7\0\c\v\8\5\7\v\t\j\4\f\h\f\c\s\t\5\c\5\l\k\f\n\z\s\y\m\j\o\w\o\e\u\8\f\s\w\m\d\i\u ]] 00:08:06.148 00:08:06.148 real 0m3.096s 00:08:06.148 user 0m1.422s 00:08:06.148 sys 0m0.689s 00:08:06.148 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.148 16:03:12 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:06.407 16:03:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:06.407 16:03:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.407 16:03:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.407 ************************************ 00:08:06.407 END TEST spdk_dd_posix 00:08:06.407 ************************************ 00:08:06.407 00:08:06.407 real 0m15.049s 00:08:06.407 user 0m6.135s 00:08:06.407 sys 0m4.144s 00:08:06.407 16:03:12 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.407 16:03:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.407 16:03:12 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:06.407 16:03:12 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.407 16:03:12 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.407 16:03:12 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:06.407 ************************************ 00:08:06.407 START TEST spdk_dd_malloc 00:08:06.407 ************************************ 00:08:06.407 16:03:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:06.407 * Looking for test storage... 00:08:06.407 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:06.407 16:03:12 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.407 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.408 --rc genhtml_branch_coverage=1 00:08:06.408 --rc genhtml_function_coverage=1 00:08:06.408 --rc genhtml_legend=1 00:08:06.408 --rc geninfo_all_blocks=1 00:08:06.408 --rc geninfo_unexecuted_blocks=1 00:08:06.408 00:08:06.408 ' 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.408 --rc genhtml_branch_coverage=1 00:08:06.408 --rc genhtml_function_coverage=1 00:08:06.408 --rc genhtml_legend=1 00:08:06.408 --rc geninfo_all_blocks=1 00:08:06.408 --rc geninfo_unexecuted_blocks=1 00:08:06.408 00:08:06.408 ' 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.408 --rc genhtml_branch_coverage=1 00:08:06.408 --rc genhtml_function_coverage=1 00:08:06.408 --rc genhtml_legend=1 00:08:06.408 --rc geninfo_all_blocks=1 00:08:06.408 --rc geninfo_unexecuted_blocks=1 00:08:06.408 00:08:06.408 ' 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.408 --rc genhtml_branch_coverage=1 00:08:06.408 --rc genhtml_function_coverage=1 00:08:06.408 --rc genhtml_legend=1 00:08:06.408 --rc geninfo_all_blocks=1 00:08:06.408 --rc geninfo_unexecuted_blocks=1 00:08:06.408 00:08:06.408 ' 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.408 16:03:13 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.408 16:03:13 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:06.667 ************************************ 00:08:06.667 START TEST dd_malloc_copy 00:08:06.667 ************************************ 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:06.667 16:03:13 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:06.667 [2024-11-19 16:03:13.180136] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:06.667 [2024-11-19 16:03:13.180403] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73753 ] 00:08:06.667 { 00:08:06.667 "subsystems": [ 00:08:06.667 { 00:08:06.667 "subsystem": "bdev", 00:08:06.667 "config": [ 00:08:06.667 { 00:08:06.667 "params": { 00:08:06.667 "block_size": 512, 00:08:06.667 "num_blocks": 1048576, 00:08:06.667 "name": "malloc0" 00:08:06.667 }, 00:08:06.667 "method": "bdev_malloc_create" 00:08:06.667 }, 00:08:06.667 { 00:08:06.667 "params": { 00:08:06.667 "block_size": 512, 00:08:06.667 "num_blocks": 1048576, 00:08:06.667 "name": "malloc1" 00:08:06.667 }, 00:08:06.667 "method": "bdev_malloc_create" 00:08:06.667 }, 00:08:06.667 { 00:08:06.667 "method": "bdev_wait_for_examine" 00:08:06.667 } 00:08:06.667 ] 00:08:06.667 } 00:08:06.667 ] 00:08:06.667 } 00:08:06.667 [2024-11-19 16:03:13.327822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.667 [2024-11-19 16:03:13.347099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.667 [2024-11-19 16:03:13.379296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.045  [2024-11-19T16:03:15.697Z] Copying: 240/512 [MB] (240 MBps) [2024-11-19T16:03:15.697Z] Copying: 482/512 [MB] (242 MBps) [2024-11-19T16:03:16.265Z] Copying: 512/512 [MB] (average 241 MBps) 00:08:09.550 00:08:09.550 16:03:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:09.550 16:03:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:09.550 16:03:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:09.550 16:03:15 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:09.550 { 00:08:09.550 "subsystems": [ 00:08:09.550 { 00:08:09.550 "subsystem": "bdev", 00:08:09.550 "config": [ 00:08:09.550 { 00:08:09.550 "params": { 00:08:09.550 "block_size": 512, 00:08:09.550 "num_blocks": 1048576, 00:08:09.550 "name": "malloc0" 00:08:09.550 }, 00:08:09.550 "method": "bdev_malloc_create" 00:08:09.550 }, 00:08:09.550 { 00:08:09.550 "params": { 00:08:09.550 "block_size": 512, 00:08:09.550 "num_blocks": 1048576, 00:08:09.550 "name": "malloc1" 00:08:09.550 }, 00:08:09.550 "method": 
"bdev_malloc_create" 00:08:09.550 }, 00:08:09.550 { 00:08:09.550 "method": "bdev_wait_for_examine" 00:08:09.550 } 00:08:09.550 ] 00:08:09.550 } 00:08:09.550 ] 00:08:09.550 } 00:08:09.550 [2024-11-19 16:03:16.023824] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:09.550 [2024-11-19 16:03:16.023915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73790 ] 00:08:09.550 [2024-11-19 16:03:16.174174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.550 [2024-11-19 16:03:16.195547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.550 [2024-11-19 16:03:16.222172] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.928  [2024-11-19T16:03:18.580Z] Copying: 239/512 [MB] (239 MBps) [2024-11-19T16:03:18.580Z] Copying: 479/512 [MB] (239 MBps) [2024-11-19T16:03:18.839Z] Copying: 512/512 [MB] (average 240 MBps) 00:08:12.124 00:08:12.124 ************************************ 00:08:12.124 END TEST dd_malloc_copy 00:08:12.124 ************************************ 00:08:12.124 00:08:12.124 real 0m5.691s 00:08:12.124 user 0m5.060s 00:08:12.124 sys 0m0.470s 00:08:12.124 16:03:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.124 16:03:18 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:12.384 ************************************ 00:08:12.384 END TEST spdk_dd_malloc 00:08:12.384 ************************************ 00:08:12.384 00:08:12.384 real 0m5.944s 00:08:12.384 user 0m5.202s 00:08:12.384 sys 0m0.577s 00:08:12.384 16:03:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.384 16:03:18 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:12.384 16:03:18 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:12.384 16:03:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:12.384 16:03:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.384 16:03:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:12.384 ************************************ 00:08:12.384 START TEST spdk_dd_bdev_to_bdev 00:08:12.384 ************************************ 00:08:12.384 16:03:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:12.384 * Looking for test storage... 
00:08:12.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:12.384 16:03:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:12.384 16:03:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:12.384 16:03:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.384 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:12.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.644 --rc genhtml_branch_coverage=1 00:08:12.644 --rc genhtml_function_coverage=1 00:08:12.644 --rc genhtml_legend=1 00:08:12.644 --rc geninfo_all_blocks=1 00:08:12.644 --rc geninfo_unexecuted_blocks=1 00:08:12.644 00:08:12.644 ' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:12.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.644 --rc genhtml_branch_coverage=1 00:08:12.644 --rc genhtml_function_coverage=1 00:08:12.644 --rc genhtml_legend=1 00:08:12.644 --rc geninfo_all_blocks=1 00:08:12.644 --rc geninfo_unexecuted_blocks=1 00:08:12.644 00:08:12.644 ' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:12.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.644 --rc genhtml_branch_coverage=1 00:08:12.644 --rc genhtml_function_coverage=1 00:08:12.644 --rc genhtml_legend=1 00:08:12.644 --rc geninfo_all_blocks=1 00:08:12.644 --rc geninfo_unexecuted_blocks=1 00:08:12.644 00:08:12.644 ' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:12.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.644 --rc genhtml_branch_coverage=1 00:08:12.644 --rc genhtml_function_coverage=1 00:08:12.644 --rc genhtml_legend=1 00:08:12.644 --rc geninfo_all_blocks=1 00:08:12.644 --rc geninfo_unexecuted_blocks=1 00:08:12.644 00:08:12.644 ' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:12.644 16:03:19 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.644 ************************************ 00:08:12.644 START TEST dd_inflate_file 00:08:12.644 ************************************ 00:08:12.644 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:12.644 [2024-11-19 16:03:19.184264] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:12.644 [2024-11-19 16:03:19.184356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73897 ] 00:08:12.644 [2024-11-19 16:03:19.330657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.644 [2024-11-19 16:03:19.348406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.903 [2024-11-19 16:03:19.375219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:12.903  [2024-11-19T16:03:19.618Z] Copying: 64/64 [MB] (average 1641 MBps) 00:08:12.903 00:08:12.903 ************************************ 00:08:12.903 END TEST dd_inflate_file 00:08:12.903 ************************************ 00:08:12.903 00:08:12.903 real 0m0.400s 00:08:12.903 user 0m0.203s 00:08:12.903 sys 0m0.212s 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 ************************************ 00:08:12.903 START TEST dd_copy_to_out_bdev 00:08:12.903 ************************************ 00:08:12.903 16:03:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:13.163 { 00:08:13.163 "subsystems": [ 00:08:13.163 { 00:08:13.163 "subsystem": "bdev", 00:08:13.163 "config": [ 00:08:13.163 { 00:08:13.163 "params": { 00:08:13.163 "trtype": "pcie", 00:08:13.163 "traddr": "0000:00:10.0", 00:08:13.163 "name": "Nvme0" 00:08:13.163 }, 00:08:13.163 "method": "bdev_nvme_attach_controller" 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "params": { 00:08:13.163 "trtype": "pcie", 00:08:13.163 "traddr": "0000:00:11.0", 00:08:13.163 "name": "Nvme1" 00:08:13.163 }, 00:08:13.163 "method": "bdev_nvme_attach_controller" 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "method": "bdev_wait_for_examine" 00:08:13.163 } 00:08:13.163 ] 00:08:13.163 } 00:08:13.163 ] 00:08:13.163 } 00:08:13.163 [2024-11-19 16:03:19.658485] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:13.163 [2024-11-19 16:03:19.658619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73930 ] 00:08:13.163 [2024-11-19 16:03:19.803791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.163 [2024-11-19 16:03:19.822988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.163 [2024-11-19 16:03:19.849727] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.542  [2024-11-19T16:03:21.257Z] Copying: 53/64 [MB] (53 MBps) [2024-11-19T16:03:21.516Z] Copying: 64/64 [MB] (average 53 MBps) 00:08:14.801 00:08:14.801 00:08:14.801 real 0m1.767s 00:08:14.801 user 0m1.583s 00:08:14.801 sys 0m1.442s 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.802 ************************************ 00:08:14.802 END TEST dd_copy_to_out_bdev 00:08:14.802 ************************************ 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 ************************************ 00:08:14.802 START TEST dd_offset_magic 00:08:14.802 ************************************ 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:14.802 16:03:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:14.802 [2024-11-19 16:03:21.459473] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
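Before the offset-magic pass, the bdev_to_bdev suite stages its data in two steps that both complete above: dd_inflate_file appends 64 x 1 MiB blocks of zeros to dd.dump0 (the 67108891-byte size check is those 64 MiB plus the 27-byte newline-terminated magic line written just before), and dd_copy_to_out_bdev streams the whole file onto the Nvme0n1 bdev at about 53 MBps. A condensed sketch of the two commands, using a placeholder file for the bdev_nvme_attach_controller config that the test passes via /dev/fd/62:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
CONF=/tmp/nvme_attach.json   # placeholder for the Nvme0/Nvme1 attach config shown above
# step 1: inflate dd.dump0 with 64 MiB of zeros, appended after the magic line
$DD --if=/dev/zero --of=$DUMP0 --oflag=append --bs=1048576 --count=64
# step 2: copy the inflated file onto the first NVMe bdev (0000:00:10.0)
$DD --if=$DUMP0 --ob=Nvme0n1 --json $CONF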
00:08:14.802 [2024-11-19 16:03:21.459562] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:08:14.802 { 00:08:14.802 "subsystems": [ 00:08:14.802 { 00:08:14.802 "subsystem": "bdev", 00:08:14.802 "config": [ 00:08:14.802 { 00:08:14.802 "params": { 00:08:14.802 "trtype": "pcie", 00:08:14.802 "traddr": "0000:00:10.0", 00:08:14.802 "name": "Nvme0" 00:08:14.802 }, 00:08:14.802 "method": "bdev_nvme_attach_controller" 00:08:14.802 }, 00:08:14.802 { 00:08:14.802 "params": { 00:08:14.802 "trtype": "pcie", 00:08:14.802 "traddr": "0000:00:11.0", 00:08:14.802 "name": "Nvme1" 00:08:14.802 }, 00:08:14.802 "method": "bdev_nvme_attach_controller" 00:08:14.802 }, 00:08:14.802 { 00:08:14.802 "method": "bdev_wait_for_examine" 00:08:14.802 } 00:08:14.802 ] 00:08:14.802 } 00:08:14.802 ] 00:08:14.802 } 00:08:15.061 [2024-11-19 16:03:21.606640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.061 [2024-11-19 16:03:21.624213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.061 [2024-11-19 16:03:21.651181] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.320  [2024-11-19T16:03:22.035Z] Copying: 65/65 [MB] (average 984 MBps) 00:08:15.320 00:08:15.579 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:15.579 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:15.579 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.579 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:15.579 [2024-11-19 16:03:22.094192] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:15.579 [2024-11-19 16:03:22.094511] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73990 ] 00:08:15.579 { 00:08:15.579 "subsystems": [ 00:08:15.579 { 00:08:15.579 "subsystem": "bdev", 00:08:15.579 "config": [ 00:08:15.579 { 00:08:15.579 "params": { 00:08:15.579 "trtype": "pcie", 00:08:15.579 "traddr": "0000:00:10.0", 00:08:15.579 "name": "Nvme0" 00:08:15.579 }, 00:08:15.579 "method": "bdev_nvme_attach_controller" 00:08:15.579 }, 00:08:15.579 { 00:08:15.579 "params": { 00:08:15.579 "trtype": "pcie", 00:08:15.579 "traddr": "0000:00:11.0", 00:08:15.579 "name": "Nvme1" 00:08:15.579 }, 00:08:15.579 "method": "bdev_nvme_attach_controller" 00:08:15.579 }, 00:08:15.580 { 00:08:15.580 "method": "bdev_wait_for_examine" 00:08:15.580 } 00:08:15.580 ] 00:08:15.580 } 00:08:15.580 ] 00:08:15.580 } 00:08:15.580 [2024-11-19 16:03:22.238524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.580 [2024-11-19 16:03:22.256072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.580 [2024-11-19 16:03:22.285768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.839  [2024-11-19T16:03:22.554Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:15.839 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:15.839 16:03:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:16.098 [2024-11-19 16:03:22.603274] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:16.098 [2024-11-19 16:03:22.603366] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:08:16.098 { 00:08:16.098 "subsystems": [ 00:08:16.098 { 00:08:16.098 "subsystem": "bdev", 00:08:16.098 "config": [ 00:08:16.098 { 00:08:16.098 "params": { 00:08:16.098 "trtype": "pcie", 00:08:16.098 "traddr": "0000:00:10.0", 00:08:16.098 "name": "Nvme0" 00:08:16.098 }, 00:08:16.098 "method": "bdev_nvme_attach_controller" 00:08:16.098 }, 00:08:16.098 { 00:08:16.098 "params": { 00:08:16.098 "trtype": "pcie", 00:08:16.098 "traddr": "0000:00:11.0", 00:08:16.098 "name": "Nvme1" 00:08:16.098 }, 00:08:16.098 "method": "bdev_nvme_attach_controller" 00:08:16.098 }, 00:08:16.098 { 00:08:16.098 "method": "bdev_wait_for_examine" 00:08:16.098 } 00:08:16.098 ] 00:08:16.098 } 00:08:16.098 ] 00:08:16.098 } 00:08:16.098 [2024-11-19 16:03:22.747109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.098 [2024-11-19 16:03:22.767916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.098 [2024-11-19 16:03:22.796445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.357  [2024-11-19T16:03:23.331Z] Copying: 65/65 [MB] (average 1065 MBps) 00:08:16.616 00:08:16.616 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:16.616 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:16.616 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:16.616 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:16.616 [2024-11-19 16:03:23.225716] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:16.616 [2024-11-19 16:03:23.225991] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74021 ] 00:08:16.616 { 00:08:16.616 "subsystems": [ 00:08:16.616 { 00:08:16.616 "subsystem": "bdev", 00:08:16.616 "config": [ 00:08:16.616 { 00:08:16.616 "params": { 00:08:16.616 "trtype": "pcie", 00:08:16.616 "traddr": "0000:00:10.0", 00:08:16.616 "name": "Nvme0" 00:08:16.616 }, 00:08:16.616 "method": "bdev_nvme_attach_controller" 00:08:16.616 }, 00:08:16.616 { 00:08:16.616 "params": { 00:08:16.616 "trtype": "pcie", 00:08:16.616 "traddr": "0000:00:11.0", 00:08:16.616 "name": "Nvme1" 00:08:16.616 }, 00:08:16.616 "method": "bdev_nvme_attach_controller" 00:08:16.616 }, 00:08:16.616 { 00:08:16.616 "method": "bdev_wait_for_examine" 00:08:16.616 } 00:08:16.616 ] 00:08:16.616 } 00:08:16.616 ] 00:08:16.616 } 00:08:16.875 [2024-11-19 16:03:23.373331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.875 [2024-11-19 16:03:23.391134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.875 [2024-11-19 16:03:23.419678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.875  [2024-11-19T16:03:23.849Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:17.134 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:17.134 ************************************ 00:08:17.134 END TEST dd_offset_magic 00:08:17.134 ************************************ 00:08:17.134 00:08:17.134 real 0m2.279s 00:08:17.134 user 0m1.680s 00:08:17.134 sys 0m0.574s 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:17.134 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:17.135 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:17.135 16:03:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.135 [2024-11-19 16:03:23.785664] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
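The dd_offset_magic pass that just finished repeats one round trip per offset in offsets=(16 64): 65 MiB are copied from Nvme0n1 into Nvme1n1 at the given --seek, 1 MiB is read back from Nvme1n1 at the matching --skip into dd.dump1, and read -rn26 checks that the magic string survived. A sketch of the first iteration (offset 16), with the same flags as above, the config again stubbed out to a file, and the redirection into read assumed rather than taken from the script:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
CONF=/tmp/nvme_attach.json   # placeholder for the /dev/fd/62 config
# write 65 MiB from Nvme0n1 into Nvme1n1, starting 16 MiB into the target
$DD --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json $CONF
# read the first 1 MiB back from the same offset on Nvme1n1
$DD --ib=Nvme1n1 --of=$DUMP1 --count=1 --skip=16 --bs=1048576 --json $CONF
# compare the first 26 bytes against the magic (redirection is an assumption for this sketch)
read -rn26 magic_check < $DUMP1
[[ $magic_check == 'This Is Our Magic, find it' ]]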
00:08:17.135 [2024-11-19 16:03:23.785913] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74052 ] 00:08:17.135 { 00:08:17.135 "subsystems": [ 00:08:17.135 { 00:08:17.135 "subsystem": "bdev", 00:08:17.135 "config": [ 00:08:17.135 { 00:08:17.135 "params": { 00:08:17.135 "trtype": "pcie", 00:08:17.135 "traddr": "0000:00:10.0", 00:08:17.135 "name": "Nvme0" 00:08:17.135 }, 00:08:17.135 "method": "bdev_nvme_attach_controller" 00:08:17.135 }, 00:08:17.135 { 00:08:17.135 "params": { 00:08:17.135 "trtype": "pcie", 00:08:17.135 "traddr": "0000:00:11.0", 00:08:17.135 "name": "Nvme1" 00:08:17.135 }, 00:08:17.135 "method": "bdev_nvme_attach_controller" 00:08:17.135 }, 00:08:17.135 { 00:08:17.135 "method": "bdev_wait_for_examine" 00:08:17.135 } 00:08:17.135 ] 00:08:17.135 } 00:08:17.135 ] 00:08:17.135 } 00:08:17.394 [2024-11-19 16:03:23.934487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.394 [2024-11-19 16:03:23.952739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.394 [2024-11-19 16:03:23.980952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.654  [2024-11-19T16:03:24.369Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:17.654 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:17.654 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.654 [2024-11-19 16:03:24.305735] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:17.654 [2024-11-19 16:03:24.305823] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74068 ] 00:08:17.654 { 00:08:17.654 "subsystems": [ 00:08:17.654 { 00:08:17.654 "subsystem": "bdev", 00:08:17.654 "config": [ 00:08:17.654 { 00:08:17.654 "params": { 00:08:17.654 "trtype": "pcie", 00:08:17.654 "traddr": "0000:00:10.0", 00:08:17.654 "name": "Nvme0" 00:08:17.654 }, 00:08:17.654 "method": "bdev_nvme_attach_controller" 00:08:17.654 }, 00:08:17.654 { 00:08:17.654 "params": { 00:08:17.654 "trtype": "pcie", 00:08:17.654 "traddr": "0000:00:11.0", 00:08:17.654 "name": "Nvme1" 00:08:17.654 }, 00:08:17.654 "method": "bdev_nvme_attach_controller" 00:08:17.654 }, 00:08:17.654 { 00:08:17.654 "method": "bdev_wait_for_examine" 00:08:17.654 } 00:08:17.654 ] 00:08:17.654 } 00:08:17.654 ] 00:08:17.654 } 00:08:17.914 [2024-11-19 16:03:24.451655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.914 [2024-11-19 16:03:24.470971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.914 [2024-11-19 16:03:24.501883] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.173  [2024-11-19T16:03:24.888Z] Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:18.173 00:08:18.173 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:18.173 ************************************ 00:08:18.173 END TEST spdk_dd_bdev_to_bdev 00:08:18.173 ************************************ 00:08:18.173 00:08:18.173 real 0m5.879s 00:08:18.173 user 0m4.380s 00:08:18.173 sys 0m2.762s 00:08:18.173 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.173 16:03:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:18.173 16:03:24 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:18.173 16:03:24 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.173 16:03:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.173 16:03:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.173 16:03:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:18.173 ************************************ 00:08:18.173 START TEST spdk_dd_uring 00:08:18.173 ************************************ 00:08:18.173 16:03:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:18.434 * Looking for test storage... 
00:08:18.434 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:18.434 16:03:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:18.434 16:03:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:08:18.434 16:03:24 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.434 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.435 --rc genhtml_branch_coverage=1 00:08:18.435 --rc genhtml_function_coverage=1 00:08:18.435 --rc genhtml_legend=1 00:08:18.435 --rc geninfo_all_blocks=1 00:08:18.435 --rc geninfo_unexecuted_blocks=1 00:08:18.435 00:08:18.435 ' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.435 --rc genhtml_branch_coverage=1 00:08:18.435 --rc genhtml_function_coverage=1 00:08:18.435 --rc genhtml_legend=1 00:08:18.435 --rc geninfo_all_blocks=1 00:08:18.435 --rc geninfo_unexecuted_blocks=1 00:08:18.435 00:08:18.435 ' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.435 --rc genhtml_branch_coverage=1 00:08:18.435 --rc genhtml_function_coverage=1 00:08:18.435 --rc genhtml_legend=1 00:08:18.435 --rc geninfo_all_blocks=1 00:08:18.435 --rc geninfo_unexecuted_blocks=1 00:08:18.435 00:08:18.435 ' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:18.435 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.435 --rc genhtml_branch_coverage=1 00:08:18.435 --rc genhtml_function_coverage=1 00:08:18.435 --rc genhtml_legend=1 00:08:18.435 --rc geninfo_all_blocks=1 00:08:18.435 --rc geninfo_unexecuted_blocks=1 00:08:18.435 00:08:18.435 ' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:18.435 ************************************ 00:08:18.435 START TEST dd_uring_copy 00:08:18.435 ************************************ 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:18.435 
16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=8h4ehgo589fsuwzjkg4dk99gb22rmooazgbcvxon0hn6ackzcoz1u5v5595whvmvc2rylu6fy1w54jikbp3n1he42h6mejhf6p8lvvljbkoqxezp3jlpvceqep6kkjnmtvo0td7i0au33gjg6v0t93ojx25fi7of5aj7ns6oiytff4x0qqjokzo3u6br95scbdp79qlv4htqjvxaigso0nmjkgrexggyxudwspyi2fli4u7hrjegmivjj257wu12a739ltx6jrf04e2b9gx50gk8wyc0zlllscskqjr70lvsbahtskayia5lno2ar3sjshm1iimbgy42ylf23o9v9llerniiiwni4p7mx9g9nbpa94eopwj5thn6k5rpejfkytswmmobghg898pck5ix2mjqz0g0v0rulh2pakeqgcfw1dn8ccvede0mjlv9uohszwi8ksboftbseyp6fluqv0kie4c7uzp6byhyh5njyplgezksnj5tto937erzqmbjqpdnh4cmz0fa3dz75t1zy2e4tkazuhmergnnd6b3kz1tccsy5f4sm25aqv8djwk4u17syljcaorh1uu6qk6udbgb4wo5r2frpngpy1irvwa5mbo9dq6qk8y40scewn5pn10rcb6timfom7w0uh8nqernxsd840e9ibkipl3l2wu24965xdhfzts8prdvfinz8k1hyyonxvenbejklrv45mflu42qazjqvq3e4i6rw2t7htpsb99e6i66h7my27ut1uqpsj8v22zmtdr19bl9jq9pxhwfz5g7ijdavgwong994nbs2t7kv3sj79ec03shyjx134gdiv0ljgyfff56obnvjlreomf9op2jeyp1fph6dzh87l8zx9tm2lthloqdzt0epqfems31rl69iip2x4s1mo9yk3bslb66gw1cpym20puu2i7ozfjaq1qqi6k6sqt8az2pve5oih50z5pgqldpj60rmkv7j3stnjkchycn7w4or889xustxtsqqem2 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
8h4ehgo589fsuwzjkg4dk99gb22rmooazgbcvxon0hn6ackzcoz1u5v5595whvmvc2rylu6fy1w54jikbp3n1he42h6mejhf6p8lvvljbkoqxezp3jlpvceqep6kkjnmtvo0td7i0au33gjg6v0t93ojx25fi7of5aj7ns6oiytff4x0qqjokzo3u6br95scbdp79qlv4htqjvxaigso0nmjkgrexggyxudwspyi2fli4u7hrjegmivjj257wu12a739ltx6jrf04e2b9gx50gk8wyc0zlllscskqjr70lvsbahtskayia5lno2ar3sjshm1iimbgy42ylf23o9v9llerniiiwni4p7mx9g9nbpa94eopwj5thn6k5rpejfkytswmmobghg898pck5ix2mjqz0g0v0rulh2pakeqgcfw1dn8ccvede0mjlv9uohszwi8ksboftbseyp6fluqv0kie4c7uzp6byhyh5njyplgezksnj5tto937erzqmbjqpdnh4cmz0fa3dz75t1zy2e4tkazuhmergnnd6b3kz1tccsy5f4sm25aqv8djwk4u17syljcaorh1uu6qk6udbgb4wo5r2frpngpy1irvwa5mbo9dq6qk8y40scewn5pn10rcb6timfom7w0uh8nqernxsd840e9ibkipl3l2wu24965xdhfzts8prdvfinz8k1hyyonxvenbejklrv45mflu42qazjqvq3e4i6rw2t7htpsb99e6i66h7my27ut1uqpsj8v22zmtdr19bl9jq9pxhwfz5g7ijdavgwong994nbs2t7kv3sj79ec03shyjx134gdiv0ljgyfff56obnvjlreomf9op2jeyp1fph6dzh87l8zx9tm2lthloqdzt0epqfems31rl69iip2x4s1mo9yk3bslb66gw1cpym20puu2i7ozfjaq1qqi6k6sqt8az2pve5oih50z5pgqldpj60rmkv7j3stnjkchycn7w4or889xustxtsqqem2 00:08:18.435 16:03:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:18.435 [2024-11-19 16:03:25.139016] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:18.435 [2024-11-19 16:03:25.139102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74146 ] 00:08:18.706 [2024-11-19 16:03:25.290401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.706 [2024-11-19 16:03:25.308185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.706 [2024-11-19 16:03:25.334329] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.290  [2024-11-19T16:03:26.264Z] Copying: 511/511 [MB] (average 1446 MBps) 00:08:19.549 00:08:19.549 16:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:19.549 16:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:19.549 16:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:19.549 16:03:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:19.549 [2024-11-19 16:03:26.085126] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:19.549 [2024-11-19 16:03:26.085207] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74157 ] 00:08:19.549 { 00:08:19.549 "subsystems": [ 00:08:19.549 { 00:08:19.549 "subsystem": "bdev", 00:08:19.549 "config": [ 00:08:19.549 { 00:08:19.549 "params": { 00:08:19.549 "block_size": 512, 00:08:19.549 "num_blocks": 1048576, 00:08:19.549 "name": "malloc0" 00:08:19.549 }, 00:08:19.549 "method": "bdev_malloc_create" 00:08:19.549 }, 00:08:19.549 { 00:08:19.549 "params": { 00:08:19.549 "filename": "/dev/zram1", 00:08:19.549 "name": "uring0" 00:08:19.549 }, 00:08:19.549 "method": "bdev_uring_create" 00:08:19.549 }, 00:08:19.549 { 00:08:19.549 "method": "bdev_wait_for_examine" 00:08:19.549 } 00:08:19.549 ] 00:08:19.549 } 00:08:19.549 ] 00:08:19.549 } 00:08:19.549 [2024-11-19 16:03:26.228738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.549 [2024-11-19 16:03:26.247837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.809 [2024-11-19 16:03:26.276386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.746  [2024-11-19T16:03:28.398Z] Copying: 245/512 [MB] (245 MBps) [2024-11-19T16:03:28.656Z] Copying: 499/512 [MB] (253 MBps) [2024-11-19T16:03:28.656Z] Copying: 512/512 [MB] (average 249 MBps) 00:08:21.941 00:08:21.942 16:03:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:21.942 16:03:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:21.942 16:03:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:21.942 16:03:28 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.201 { 00:08:22.201 "subsystems": [ 00:08:22.201 { 00:08:22.201 "subsystem": "bdev", 00:08:22.201 "config": [ 00:08:22.201 { 00:08:22.201 "params": { 00:08:22.201 "block_size": 512, 00:08:22.201 "num_blocks": 1048576, 00:08:22.201 "name": "malloc0" 00:08:22.201 }, 00:08:22.201 "method": "bdev_malloc_create" 00:08:22.201 }, 00:08:22.201 { 00:08:22.201 "params": { 00:08:22.201 "filename": "/dev/zram1", 00:08:22.201 "name": "uring0" 00:08:22.201 }, 00:08:22.201 "method": "bdev_uring_create" 00:08:22.201 }, 00:08:22.201 { 00:08:22.201 "method": "bdev_wait_for_examine" 00:08:22.201 } 00:08:22.201 ] 00:08:22.201 } 00:08:22.201 ] 00:08:22.201 } 00:08:22.201 [2024-11-19 16:03:28.702938] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
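The dd_uring_copy test in progress here drives the same copy/verify pattern through an io_uring bdev backed by zram: a zram device is hot-added via /sys/class/zram-control/hot_add and sized to 512M, exposed as uring0 with bdev_uring_create on /dev/zram1, and magic.dump0 is built from a generated 1024-character magic plus 536869887 appended zero bytes (with the trailing newline, exactly 512 MiB). The file is then copied onto uring0 (~249 MBps above) and read back into magic.dump1 for the verify_magic comparison that follows. A sketch of the two copies, with the malloc0/uring0 config written to a scratch file instead of /dev/fd/62:

cat > /tmp/uring_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" }, "method": "bdev_malloc_create" },
        { "params": { "filename": "/dev/zram1", "name": "uring0" }, "method": "bdev_uring_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
MAGIC0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0
MAGIC1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
$DD --if=$MAGIC0 --ob=uring0 --json /tmp/uring_copy.json   # file -> uring bdev (dd/uring.sh line 54)
$DD --ib=uring0 --of=$MAGIC1 --json /tmp/uring_copy.json   # uring bdev -> file (dd/uring.sh line 60)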
00:08:22.201 [2024-11-19 16:03:28.703066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74195 ] 00:08:22.201 [2024-11-19 16:03:28.850177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.201 [2024-11-19 16:03:28.869344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.201 [2024-11-19 16:03:28.896098] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.578  [2024-11-19T16:03:31.230Z] Copying: 197/512 [MB] (197 MBps) [2024-11-19T16:03:31.798Z] Copying: 372/512 [MB] (174 MBps) [2024-11-19T16:03:32.057Z] Copying: 512/512 [MB] (average 188 MBps) 00:08:25.342 00:08:25.342 16:03:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:25.342 16:03:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 8h4ehgo589fsuwzjkg4dk99gb22rmooazgbcvxon0hn6ackzcoz1u5v5595whvmvc2rylu6fy1w54jikbp3n1he42h6mejhf6p8lvvljbkoqxezp3jlpvceqep6kkjnmtvo0td7i0au33gjg6v0t93ojx25fi7of5aj7ns6oiytff4x0qqjokzo3u6br95scbdp79qlv4htqjvxaigso0nmjkgrexggyxudwspyi2fli4u7hrjegmivjj257wu12a739ltx6jrf04e2b9gx50gk8wyc0zlllscskqjr70lvsbahtskayia5lno2ar3sjshm1iimbgy42ylf23o9v9llerniiiwni4p7mx9g9nbpa94eopwj5thn6k5rpejfkytswmmobghg898pck5ix2mjqz0g0v0rulh2pakeqgcfw1dn8ccvede0mjlv9uohszwi8ksboftbseyp6fluqv0kie4c7uzp6byhyh5njyplgezksnj5tto937erzqmbjqpdnh4cmz0fa3dz75t1zy2e4tkazuhmergnnd6b3kz1tccsy5f4sm25aqv8djwk4u17syljcaorh1uu6qk6udbgb4wo5r2frpngpy1irvwa5mbo9dq6qk8y40scewn5pn10rcb6timfom7w0uh8nqernxsd840e9ibkipl3l2wu24965xdhfzts8prdvfinz8k1hyyonxvenbejklrv45mflu42qazjqvq3e4i6rw2t7htpsb99e6i66h7my27ut1uqpsj8v22zmtdr19bl9jq9pxhwfz5g7ijdavgwong994nbs2t7kv3sj79ec03shyjx134gdiv0ljgyfff56obnvjlreomf9op2jeyp1fph6dzh87l8zx9tm2lthloqdzt0epqfems31rl69iip2x4s1mo9yk3bslb66gw1cpym20puu2i7ozfjaq1qqi6k6sqt8az2pve5oih50z5pgqldpj60rmkv7j3stnjkchycn7w4or889xustxtsqqem2 == 
\8\h\4\e\h\g\o\5\8\9\f\s\u\w\z\j\k\g\4\d\k\9\9\g\b\2\2\r\m\o\o\a\z\g\b\c\v\x\o\n\0\h\n\6\a\c\k\z\c\o\z\1\u\5\v\5\5\9\5\w\h\v\m\v\c\2\r\y\l\u\6\f\y\1\w\5\4\j\i\k\b\p\3\n\1\h\e\4\2\h\6\m\e\j\h\f\6\p\8\l\v\v\l\j\b\k\o\q\x\e\z\p\3\j\l\p\v\c\e\q\e\p\6\k\k\j\n\m\t\v\o\0\t\d\7\i\0\a\u\3\3\g\j\g\6\v\0\t\9\3\o\j\x\2\5\f\i\7\o\f\5\a\j\7\n\s\6\o\i\y\t\f\f\4\x\0\q\q\j\o\k\z\o\3\u\6\b\r\9\5\s\c\b\d\p\7\9\q\l\v\4\h\t\q\j\v\x\a\i\g\s\o\0\n\m\j\k\g\r\e\x\g\g\y\x\u\d\w\s\p\y\i\2\f\l\i\4\u\7\h\r\j\e\g\m\i\v\j\j\2\5\7\w\u\1\2\a\7\3\9\l\t\x\6\j\r\f\0\4\e\2\b\9\g\x\5\0\g\k\8\w\y\c\0\z\l\l\l\s\c\s\k\q\j\r\7\0\l\v\s\b\a\h\t\s\k\a\y\i\a\5\l\n\o\2\a\r\3\s\j\s\h\m\1\i\i\m\b\g\y\4\2\y\l\f\2\3\o\9\v\9\l\l\e\r\n\i\i\i\w\n\i\4\p\7\m\x\9\g\9\n\b\p\a\9\4\e\o\p\w\j\5\t\h\n\6\k\5\r\p\e\j\f\k\y\t\s\w\m\m\o\b\g\h\g\8\9\8\p\c\k\5\i\x\2\m\j\q\z\0\g\0\v\0\r\u\l\h\2\p\a\k\e\q\g\c\f\w\1\d\n\8\c\c\v\e\d\e\0\m\j\l\v\9\u\o\h\s\z\w\i\8\k\s\b\o\f\t\b\s\e\y\p\6\f\l\u\q\v\0\k\i\e\4\c\7\u\z\p\6\b\y\h\y\h\5\n\j\y\p\l\g\e\z\k\s\n\j\5\t\t\o\9\3\7\e\r\z\q\m\b\j\q\p\d\n\h\4\c\m\z\0\f\a\3\d\z\7\5\t\1\z\y\2\e\4\t\k\a\z\u\h\m\e\r\g\n\n\d\6\b\3\k\z\1\t\c\c\s\y\5\f\4\s\m\2\5\a\q\v\8\d\j\w\k\4\u\1\7\s\y\l\j\c\a\o\r\h\1\u\u\6\q\k\6\u\d\b\g\b\4\w\o\5\r\2\f\r\p\n\g\p\y\1\i\r\v\w\a\5\m\b\o\9\d\q\6\q\k\8\y\4\0\s\c\e\w\n\5\p\n\1\0\r\c\b\6\t\i\m\f\o\m\7\w\0\u\h\8\n\q\e\r\n\x\s\d\8\4\0\e\9\i\b\k\i\p\l\3\l\2\w\u\2\4\9\6\5\x\d\h\f\z\t\s\8\p\r\d\v\f\i\n\z\8\k\1\h\y\y\o\n\x\v\e\n\b\e\j\k\l\r\v\4\5\m\f\l\u\4\2\q\a\z\j\q\v\q\3\e\4\i\6\r\w\2\t\7\h\t\p\s\b\9\9\e\6\i\6\6\h\7\m\y\2\7\u\t\1\u\q\p\s\j\8\v\2\2\z\m\t\d\r\1\9\b\l\9\j\q\9\p\x\h\w\f\z\5\g\7\i\j\d\a\v\g\w\o\n\g\9\9\4\n\b\s\2\t\7\k\v\3\s\j\7\9\e\c\0\3\s\h\y\j\x\1\3\4\g\d\i\v\0\l\j\g\y\f\f\f\5\6\o\b\n\v\j\l\r\e\o\m\f\9\o\p\2\j\e\y\p\1\f\p\h\6\d\z\h\8\7\l\8\z\x\9\t\m\2\l\t\h\l\o\q\d\z\t\0\e\p\q\f\e\m\s\3\1\r\l\6\9\i\i\p\2\x\4\s\1\m\o\9\y\k\3\b\s\l\b\6\6\g\w\1\c\p\y\m\2\0\p\u\u\2\i\7\o\z\f\j\a\q\1\q\q\i\6\k\6\s\q\t\8\a\z\2\p\v\e\5\o\i\h\5\0\z\5\p\g\q\l\d\p\j\6\0\r\m\k\v\7\j\3\s\t\n\j\k\c\h\y\c\n\7\w\4\o\r\8\8\9\x\u\s\t\x\t\s\q\q\e\m\2 ]] 00:08:25.342 16:03:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:25.342 16:03:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 8h4ehgo589fsuwzjkg4dk99gb22rmooazgbcvxon0hn6ackzcoz1u5v5595whvmvc2rylu6fy1w54jikbp3n1he42h6mejhf6p8lvvljbkoqxezp3jlpvceqep6kkjnmtvo0td7i0au33gjg6v0t93ojx25fi7of5aj7ns6oiytff4x0qqjokzo3u6br95scbdp79qlv4htqjvxaigso0nmjkgrexggyxudwspyi2fli4u7hrjegmivjj257wu12a739ltx6jrf04e2b9gx50gk8wyc0zlllscskqjr70lvsbahtskayia5lno2ar3sjshm1iimbgy42ylf23o9v9llerniiiwni4p7mx9g9nbpa94eopwj5thn6k5rpejfkytswmmobghg898pck5ix2mjqz0g0v0rulh2pakeqgcfw1dn8ccvede0mjlv9uohszwi8ksboftbseyp6fluqv0kie4c7uzp6byhyh5njyplgezksnj5tto937erzqmbjqpdnh4cmz0fa3dz75t1zy2e4tkazuhmergnnd6b3kz1tccsy5f4sm25aqv8djwk4u17syljcaorh1uu6qk6udbgb4wo5r2frpngpy1irvwa5mbo9dq6qk8y40scewn5pn10rcb6timfom7w0uh8nqernxsd840e9ibkipl3l2wu24965xdhfzts8prdvfinz8k1hyyonxvenbejklrv45mflu42qazjqvq3e4i6rw2t7htpsb99e6i66h7my27ut1uqpsj8v22zmtdr19bl9jq9pxhwfz5g7ijdavgwong994nbs2t7kv3sj79ec03shyjx134gdiv0ljgyfff56obnvjlreomf9op2jeyp1fph6dzh87l8zx9tm2lthloqdzt0epqfems31rl69iip2x4s1mo9yk3bslb66gw1cpym20puu2i7ozfjaq1qqi6k6sqt8az2pve5oih50z5pgqldpj60rmkv7j3stnjkchycn7w4or889xustxtsqqem2 == 
\8\h\4\e\h\g\o\5\8\9\f\s\u\w\z\j\k\g\4\d\k\9\9\g\b\2\2\r\m\o\o\a\z\g\b\c\v\x\o\n\0\h\n\6\a\c\k\z\c\o\z\1\u\5\v\5\5\9\5\w\h\v\m\v\c\2\r\y\l\u\6\f\y\1\w\5\4\j\i\k\b\p\3\n\1\h\e\4\2\h\6\m\e\j\h\f\6\p\8\l\v\v\l\j\b\k\o\q\x\e\z\p\3\j\l\p\v\c\e\q\e\p\6\k\k\j\n\m\t\v\o\0\t\d\7\i\0\a\u\3\3\g\j\g\6\v\0\t\9\3\o\j\x\2\5\f\i\7\o\f\5\a\j\7\n\s\6\o\i\y\t\f\f\4\x\0\q\q\j\o\k\z\o\3\u\6\b\r\9\5\s\c\b\d\p\7\9\q\l\v\4\h\t\q\j\v\x\a\i\g\s\o\0\n\m\j\k\g\r\e\x\g\g\y\x\u\d\w\s\p\y\i\2\f\l\i\4\u\7\h\r\j\e\g\m\i\v\j\j\2\5\7\w\u\1\2\a\7\3\9\l\t\x\6\j\r\f\0\4\e\2\b\9\g\x\5\0\g\k\8\w\y\c\0\z\l\l\l\s\c\s\k\q\j\r\7\0\l\v\s\b\a\h\t\s\k\a\y\i\a\5\l\n\o\2\a\r\3\s\j\s\h\m\1\i\i\m\b\g\y\4\2\y\l\f\2\3\o\9\v\9\l\l\e\r\n\i\i\i\w\n\i\4\p\7\m\x\9\g\9\n\b\p\a\9\4\e\o\p\w\j\5\t\h\n\6\k\5\r\p\e\j\f\k\y\t\s\w\m\m\o\b\g\h\g\8\9\8\p\c\k\5\i\x\2\m\j\q\z\0\g\0\v\0\r\u\l\h\2\p\a\k\e\q\g\c\f\w\1\d\n\8\c\c\v\e\d\e\0\m\j\l\v\9\u\o\h\s\z\w\i\8\k\s\b\o\f\t\b\s\e\y\p\6\f\l\u\q\v\0\k\i\e\4\c\7\u\z\p\6\b\y\h\y\h\5\n\j\y\p\l\g\e\z\k\s\n\j\5\t\t\o\9\3\7\e\r\z\q\m\b\j\q\p\d\n\h\4\c\m\z\0\f\a\3\d\z\7\5\t\1\z\y\2\e\4\t\k\a\z\u\h\m\e\r\g\n\n\d\6\b\3\k\z\1\t\c\c\s\y\5\f\4\s\m\2\5\a\q\v\8\d\j\w\k\4\u\1\7\s\y\l\j\c\a\o\r\h\1\u\u\6\q\k\6\u\d\b\g\b\4\w\o\5\r\2\f\r\p\n\g\p\y\1\i\r\v\w\a\5\m\b\o\9\d\q\6\q\k\8\y\4\0\s\c\e\w\n\5\p\n\1\0\r\c\b\6\t\i\m\f\o\m\7\w\0\u\h\8\n\q\e\r\n\x\s\d\8\4\0\e\9\i\b\k\i\p\l\3\l\2\w\u\2\4\9\6\5\x\d\h\f\z\t\s\8\p\r\d\v\f\i\n\z\8\k\1\h\y\y\o\n\x\v\e\n\b\e\j\k\l\r\v\4\5\m\f\l\u\4\2\q\a\z\j\q\v\q\3\e\4\i\6\r\w\2\t\7\h\t\p\s\b\9\9\e\6\i\6\6\h\7\m\y\2\7\u\t\1\u\q\p\s\j\8\v\2\2\z\m\t\d\r\1\9\b\l\9\j\q\9\p\x\h\w\f\z\5\g\7\i\j\d\a\v\g\w\o\n\g\9\9\4\n\b\s\2\t\7\k\v\3\s\j\7\9\e\c\0\3\s\h\y\j\x\1\3\4\g\d\i\v\0\l\j\g\y\f\f\f\5\6\o\b\n\v\j\l\r\e\o\m\f\9\o\p\2\j\e\y\p\1\f\p\h\6\d\z\h\8\7\l\8\z\x\9\t\m\2\l\t\h\l\o\q\d\z\t\0\e\p\q\f\e\m\s\3\1\r\l\6\9\i\i\p\2\x\4\s\1\m\o\9\y\k\3\b\s\l\b\6\6\g\w\1\c\p\y\m\2\0\p\u\u\2\i\7\o\z\f\j\a\q\1\q\q\i\6\k\6\s\q\t\8\a\z\2\p\v\e\5\o\i\h\5\0\z\5\p\g\q\l\d\p\j\6\0\r\m\k\v\7\j\3\s\t\n\j\k\c\h\y\c\n\7\w\4\o\r\8\8\9\x\u\s\t\x\t\s\q\q\e\m\2 ]] 00:08:25.342 16:03:31 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:25.601 16:03:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:25.601 16:03:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:25.601 16:03:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:25.601 16:03:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:25.861 { 00:08:25.861 "subsystems": [ 00:08:25.861 { 00:08:25.861 "subsystem": "bdev", 00:08:25.861 "config": [ 00:08:25.861 { 00:08:25.861 "params": { 00:08:25.861 "block_size": 512, 00:08:25.861 "num_blocks": 1048576, 00:08:25.861 "name": "malloc0" 00:08:25.861 }, 00:08:25.861 "method": "bdev_malloc_create" 00:08:25.861 }, 00:08:25.861 { 00:08:25.861 "params": { 00:08:25.861 "filename": "/dev/zram1", 00:08:25.861 "name": "uring0" 00:08:25.861 }, 00:08:25.861 "method": "bdev_uring_create" 00:08:25.861 }, 00:08:25.861 { 00:08:25.861 "method": "bdev_wait_for_examine" 00:08:25.861 } 00:08:25.861 ] 00:08:25.861 } 00:08:25.861 ] 00:08:25.861 } 00:08:25.861 [2024-11-19 16:03:32.348957] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:25.861 [2024-11-19 16:03:32.349062] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74257 ] 00:08:25.861 [2024-11-19 16:03:32.495110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.861 [2024-11-19 16:03:32.517463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.861 [2024-11-19 16:03:32.546626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.238  [2024-11-19T16:03:34.890Z] Copying: 171/512 [MB] (171 MBps) [2024-11-19T16:03:35.828Z] Copying: 343/512 [MB] (171 MBps) [2024-11-19T16:03:35.828Z] Copying: 512/512 [MB] (average 172 MBps) 00:08:29.113 00:08:29.113 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:29.113 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.373 16:03:35 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.373 [2024-11-19 16:03:35.885628] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:29.373 [2024-11-19 16:03:35.885722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74302 ] 00:08:29.373 { 00:08:29.373 "subsystems": [ 00:08:29.373 { 00:08:29.373 "subsystem": "bdev", 00:08:29.373 "config": [ 00:08:29.373 { 00:08:29.373 "params": { 00:08:29.373 "block_size": 512, 00:08:29.373 "num_blocks": 1048576, 00:08:29.373 "name": "malloc0" 00:08:29.373 }, 00:08:29.373 "method": "bdev_malloc_create" 00:08:29.373 }, 00:08:29.373 { 00:08:29.373 "params": { 00:08:29.373 "filename": "/dev/zram1", 00:08:29.373 "name": "uring0" 00:08:29.373 }, 00:08:29.373 "method": "bdev_uring_create" 00:08:29.373 }, 00:08:29.373 { 00:08:29.373 "params": { 00:08:29.373 "name": "uring0" 00:08:29.373 }, 00:08:29.373 "method": "bdev_uring_delete" 00:08:29.373 }, 00:08:29.373 { 00:08:29.373 "method": "bdev_wait_for_examine" 00:08:29.373 } 00:08:29.373 ] 00:08:29.373 } 00:08:29.373 ] 00:08:29.373 } 00:08:29.373 [2024-11-19 16:03:36.038241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.373 [2024-11-19 16:03:36.062615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.633 [2024-11-19 16:03:36.093932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.633  [2024-11-19T16:03:36.607Z] Copying: 0/0 [B] (average 0 Bps) 00:08:29.892 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:29.892 16:03:36 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:29.892 16:03:36 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:29.892 { 00:08:29.892 "subsystems": [ 00:08:29.892 { 00:08:29.892 "subsystem": "bdev", 00:08:29.892 "config": [ 00:08:29.892 { 00:08:29.892 "params": { 00:08:29.892 "block_size": 512, 00:08:29.892 "num_blocks": 1048576, 00:08:29.892 "name": "malloc0" 00:08:29.892 }, 00:08:29.892 "method": "bdev_malloc_create" 00:08:29.892 }, 00:08:29.892 { 00:08:29.892 "params": { 00:08:29.892 "filename": "/dev/zram1", 00:08:29.892 "name": "uring0" 00:08:29.892 }, 00:08:29.892 "method": "bdev_uring_create" 00:08:29.892 }, 00:08:29.892 { 00:08:29.892 "params": { 00:08:29.892 "name": "uring0" 00:08:29.892 }, 00:08:29.892 "method": "bdev_uring_delete" 00:08:29.892 }, 00:08:29.892 { 00:08:29.892 "method": "bdev_wait_for_examine" 00:08:29.892 } 00:08:29.892 ] 00:08:29.892 } 00:08:29.892 ] 00:08:29.892 } 00:08:29.892 [2024-11-19 16:03:36.509171] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:29.892 [2024-11-19 16:03:36.509318] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74333 ] 00:08:30.151 [2024-11-19 16:03:36.657898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.151 [2024-11-19 16:03:36.677579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.151 [2024-11-19 16:03:36.704881] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.151 [2024-11-19 16:03:36.818201] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:30.151 [2024-11-19 16:03:36.818305] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:30.151 [2024-11-19 16:03:36.818320] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:30.151 [2024-11-19 16:03:36.818329] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.409 [2024-11-19 16:03:36.974501] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.409 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:08:30.409 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.409 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:08:30.409 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.409 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:30.410 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:30.668 00:08:30.668 real 0m12.165s 00:08:30.668 user 0m8.280s 00:08:30.668 sys 0m10.391s 00:08:30.668 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.668 16:03:37 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.668 ************************************ 00:08:30.668 END TEST dd_uring_copy 00:08:30.668 ************************************ 00:08:30.668 00:08:30.669 real 0m12.416s 00:08:30.669 user 0m8.427s 00:08:30.669 sys 0m10.495s 00:08:30.669 ************************************ 00:08:30.669 END TEST spdk_dd_uring 00:08:30.669 ************************************ 00:08:30.669 16:03:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.669 16:03:37 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:30.669 16:03:37 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:30.669 16:03:37 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.669 16:03:37 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.669 16:03:37 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:30.669 ************************************ 00:08:30.669 START TEST spdk_dd_sparse 00:08:30.669 ************************************ 00:08:30.669 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:30.958 * Looking for test storage... 00:08:30.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:30.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.958 --rc genhtml_branch_coverage=1 00:08:30.958 --rc genhtml_function_coverage=1 00:08:30.958 --rc genhtml_legend=1 00:08:30.958 --rc geninfo_all_blocks=1 00:08:30.958 --rc geninfo_unexecuted_blocks=1 00:08:30.958 00:08:30.958 ' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:30.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.958 --rc genhtml_branch_coverage=1 00:08:30.958 --rc genhtml_function_coverage=1 00:08:30.958 --rc genhtml_legend=1 00:08:30.958 --rc geninfo_all_blocks=1 00:08:30.958 --rc geninfo_unexecuted_blocks=1 00:08:30.958 00:08:30.958 ' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:30.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.958 --rc genhtml_branch_coverage=1 00:08:30.958 --rc genhtml_function_coverage=1 00:08:30.958 --rc genhtml_legend=1 00:08:30.958 --rc geninfo_all_blocks=1 00:08:30.958 --rc geninfo_unexecuted_blocks=1 00:08:30.958 00:08:30.958 ' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:30.958 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.958 --rc genhtml_branch_coverage=1 00:08:30.958 --rc genhtml_function_coverage=1 00:08:30.958 --rc genhtml_legend=1 00:08:30.958 --rc geninfo_all_blocks=1 00:08:30.958 --rc geninfo_unexecuted_blocks=1 00:08:30.958 00:08:30.958 ' 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.958 16:03:37 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.958 16:03:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:30.959 1+0 records in 00:08:30.959 1+0 records out 00:08:30.959 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00650017 s, 645 MB/s 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:30.959 1+0 records in 00:08:30.959 1+0 records out 00:08:30.959 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00584701 s, 717 MB/s 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:30.959 1+0 records in 00:08:30.959 1+0 records out 00:08:30.959 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0063511 s, 660 MB/s 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:30.959 ************************************ 00:08:30.959 START TEST dd_sparse_file_to_file 00:08:30.959 ************************************ 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:30.959 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:30.959 [2024-11-19 16:03:37.585396] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:30.959 [2024-11-19 16:03:37.585520] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74427 ] 00:08:30.959 { 00:08:30.959 "subsystems": [ 00:08:30.959 { 00:08:30.959 "subsystem": "bdev", 00:08:30.959 "config": [ 00:08:30.959 { 00:08:30.959 "params": { 00:08:30.959 "block_size": 4096, 00:08:30.959 "filename": "dd_sparse_aio_disk", 00:08:30.959 "name": "dd_aio" 00:08:30.959 }, 00:08:30.959 "method": "bdev_aio_create" 00:08:30.959 }, 00:08:30.959 { 00:08:30.959 "params": { 00:08:30.959 "lvs_name": "dd_lvstore", 00:08:30.959 "bdev_name": "dd_aio" 00:08:30.959 }, 00:08:30.959 "method": "bdev_lvol_create_lvstore" 00:08:30.959 }, 00:08:30.959 { 00:08:30.959 "method": "bdev_wait_for_examine" 00:08:30.959 } 00:08:30.959 ] 00:08:30.959 } 00:08:30.959 ] 00:08:30.959 } 00:08:31.218 [2024-11-19 16:03:37.727536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.218 [2024-11-19 16:03:37.746981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.218 [2024-11-19 16:03:37.773994] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.218  [2024-11-19T16:03:38.193Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:31.478 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:31.478 16:03:37 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:31.478 00:08:31.478 real 0m0.476s 00:08:31.478 user 0m0.280s 00:08:31.478 sys 0m0.238s 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 ************************************ 00:08:31.478 END TEST dd_sparse_file_to_file 00:08:31.478 ************************************ 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 ************************************ 00:08:31.478 START TEST dd_sparse_file_to_bdev 
00:08:31.478 ************************************ 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:31.478 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:31.478 [2024-11-19 16:03:38.113242] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:31.478 [2024-11-19 16:03:38.113344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74464 ] 00:08:31.478 { 00:08:31.478 "subsystems": [ 00:08:31.478 { 00:08:31.478 "subsystem": "bdev", 00:08:31.478 "config": [ 00:08:31.478 { 00:08:31.478 "params": { 00:08:31.478 "block_size": 4096, 00:08:31.478 "filename": "dd_sparse_aio_disk", 00:08:31.478 "name": "dd_aio" 00:08:31.478 }, 00:08:31.478 "method": "bdev_aio_create" 00:08:31.478 }, 00:08:31.478 { 00:08:31.478 "params": { 00:08:31.478 "lvs_name": "dd_lvstore", 00:08:31.478 "lvol_name": "dd_lvol", 00:08:31.478 "size_in_mib": 36, 00:08:31.478 "thin_provision": true 00:08:31.478 }, 00:08:31.478 "method": "bdev_lvol_create" 00:08:31.478 }, 00:08:31.478 { 00:08:31.478 "method": "bdev_wait_for_examine" 00:08:31.478 } 00:08:31.478 ] 00:08:31.478 } 00:08:31.478 ] 00:08:31.478 } 00:08:31.737 [2024-11-19 16:03:38.252612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.737 [2024-11-19 16:03:38.272309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.737 [2024-11-19 16:03:38.300267] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:31.737  [2024-11-19T16:03:38.712Z] Copying: 12/36 [MB] (average 545 MBps) 00:08:31.997 00:08:31.997 00:08:31.997 real 0m0.452s 00:08:31.997 user 0m0.274s 00:08:31.997 sys 0m0.236s 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.997 ************************************ 00:08:31.997 END TEST dd_sparse_file_to_bdev 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:31.997 ************************************ 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:31.997 ************************************ 00:08:31.997 START TEST dd_sparse_bdev_to_file 00:08:31.997 ************************************ 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:31.997 16:03:38 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:31.997 { 00:08:31.997 "subsystems": [ 00:08:31.997 { 00:08:31.997 "subsystem": "bdev", 00:08:31.997 "config": [ 00:08:31.997 { 00:08:31.997 "params": { 00:08:31.997 "block_size": 4096, 00:08:31.997 "filename": "dd_sparse_aio_disk", 00:08:31.997 "name": "dd_aio" 00:08:31.997 }, 00:08:31.997 "method": "bdev_aio_create" 00:08:31.997 }, 00:08:31.997 { 00:08:31.997 "method": "bdev_wait_for_examine" 00:08:31.997 } 00:08:31.997 ] 00:08:31.997 } 00:08:31.997 ] 00:08:31.997 } 00:08:31.997 [2024-11-19 16:03:38.630676] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:31.997 [2024-11-19 16:03:38.630773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74502 ] 00:08:32.256 [2024-11-19 16:03:38.775291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.256 [2024-11-19 16:03:38.795173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.256 [2024-11-19 16:03:38.824406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.256  [2024-11-19T16:03:39.230Z] Copying: 12/36 [MB] (average 923 MBps) 00:08:32.515 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:32.515 00:08:32.515 real 0m0.471s 00:08:32.515 user 0m0.279s 00:08:32.515 sys 0m0.245s 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:32.515 ************************************ 00:08:32.515 END TEST dd_sparse_bdev_to_file 00:08:32.515 ************************************ 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:32.515 00:08:32.515 real 0m1.792s 00:08:32.515 user 0m1.006s 00:08:32.515 sys 0m0.935s 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.515 ************************************ 00:08:32.515 END TEST spdk_dd_sparse 00:08:32.515 ************************************ 00:08:32.515 16:03:39 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:32.515 16:03:39 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:32.515 16:03:39 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.515 16:03:39 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.515 16:03:39 spdk_dd -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.516 ************************************ 00:08:32.516 START TEST spdk_dd_negative 00:08:32.516 ************************************ 00:08:32.516 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:32.775 * Looking for test storage... 00:08:32.775 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:32.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.775 --rc genhtml_branch_coverage=1 00:08:32.775 --rc genhtml_function_coverage=1 00:08:32.775 --rc genhtml_legend=1 00:08:32.775 --rc geninfo_all_blocks=1 00:08:32.775 --rc geninfo_unexecuted_blocks=1 00:08:32.775 00:08:32.775 ' 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:32.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.775 --rc genhtml_branch_coverage=1 00:08:32.775 --rc genhtml_function_coverage=1 00:08:32.775 --rc genhtml_legend=1 00:08:32.775 --rc geninfo_all_blocks=1 00:08:32.775 --rc geninfo_unexecuted_blocks=1 00:08:32.775 00:08:32.775 ' 00:08:32.775 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:32.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.776 --rc genhtml_branch_coverage=1 00:08:32.776 --rc genhtml_function_coverage=1 00:08:32.776 --rc genhtml_legend=1 00:08:32.776 --rc geninfo_all_blocks=1 00:08:32.776 --rc geninfo_unexecuted_blocks=1 00:08:32.776 00:08:32.776 ' 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:32.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.776 --rc genhtml_branch_coverage=1 00:08:32.776 --rc genhtml_function_coverage=1 00:08:32.776 --rc genhtml_legend=1 00:08:32.776 --rc geninfo_all_blocks=1 00:08:32.776 --rc geninfo_unexecuted_blocks=1 00:08:32.776 00:08:32.776 ' 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.776 ************************************ 00:08:32.776 START TEST 
dd_invalid_arguments 00:08:32.776 ************************************ 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.776 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:32.776 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:32.776 00:08:32.776 CPU options: 00:08:32.776 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:32.776 (like [0,1,10]) 00:08:32.776 --lcores lcore to CPU mapping list. The list is in the format: 00:08:32.776 [<,lcores[@CPUs]>...] 00:08:32.776 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:32.776 Within the group, '-' is used for range separator, 00:08:32.776 ',' is used for single number separator. 00:08:32.776 '( )' can be omitted for single element group, 00:08:32.776 '@' can be omitted if cpus and lcores have the same value 00:08:32.776 --disable-cpumask-locks Disable CPU core lock files. 00:08:32.776 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:32.776 pollers in the app support interrupt mode) 00:08:32.776 -p, --main-core main (primary) core for DPDK 00:08:32.776 00:08:32.776 Configuration options: 00:08:32.776 -c, --config, --json JSON config file 00:08:32.776 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:32.776 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:32.776 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:32.776 --rpcs-allowed comma-separated list of permitted RPCS 00:08:32.776 --json-ignore-init-errors don't exit on invalid config entry 00:08:32.776 00:08:32.776 Memory options: 00:08:32.776 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:32.776 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:32.776 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:32.776 -R, --huge-unlink unlink huge files after initialization 00:08:32.776 -n, --mem-channels number of memory channels used for DPDK 00:08:32.776 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:32.776 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:32.776 --no-huge run without using hugepages 00:08:32.776 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:32.776 -i, --shm-id shared memory ID (optional) 00:08:32.776 -g, --single-file-segments force creating just one hugetlbfs file 00:08:32.776 00:08:32.776 PCI options: 00:08:32.776 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:32.776 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:32.776 -u, --no-pci disable PCI access 00:08:32.776 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:32.776 00:08:32.776 Log options: 00:08:32.777 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:32.777 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:32.777 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:32.777 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:32.777 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:08:32.777 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:08:32.777 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:08:32.777 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:08:32.777 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:32.777 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:08:32.777 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:08:32.777 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:08:32.777 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:32.777 --silence-noticelog disable notice level logging to stderr 00:08:32.777 00:08:32.777 Trace options: 00:08:32.777 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:32.777 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:32.777 [2024-11-19 16:03:39.412690] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:32.777 setting 0 to disable trace (default 32768) 00:08:32.777 Tracepoints vary in size and can use more than one trace entry. 00:08:32.777 -e, --tpoint-group [:] 00:08:32.777 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:08:32.777 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:08:32.777 blob, bdev_raid, scheduler, all). 00:08:32.777 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:32.777 a tracepoint group. First tpoint inside a group can be enabled by 00:08:32.777 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:32.777 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:08:32.777 in /include/spdk_internal/trace_defs.h 00:08:32.777 00:08:32.777 Other options: 00:08:32.777 -h, --help show this usage 00:08:32.777 -v, --version print SPDK version 00:08:32.777 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:32.777 --env-context Opaque context for use of the env implementation 00:08:32.777 00:08:32.777 Application specific: 00:08:32.777 [--------- DD Options ---------] 00:08:32.777 --if Input file. Must specify either --if or --ib. 00:08:32.777 --ib Input bdev. Must specifier either --if or --ib 00:08:32.777 --of Output file. Must specify either --of or --ob. 00:08:32.777 --ob Output bdev. Must specify either --of or --ob. 00:08:32.777 --iflag Input file flags. 00:08:32.777 --oflag Output file flags. 00:08:32.777 --bs I/O unit size (default: 4096) 00:08:32.777 --qd Queue depth (default: 2) 00:08:32.777 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:32.777 --skip Skip this many I/O units at start of input. (default: 0) 00:08:32.777 --seek Skip this many I/O units at start of output. (default: 0) 00:08:32.777 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:32.777 --sparse Enable hole skipping in input target 00:08:32.777 Available iflag and oflag values: 00:08:32.777 append - append mode 00:08:32.777 direct - use direct I/O for data 00:08:32.777 directory - fail unless a directory 00:08:32.777 dsync - use synchronized I/O for data 00:08:32.777 noatime - do not update access time 00:08:32.777 noctty - do not assign controlling terminal from file 00:08:32.777 nofollow - do not follow symlinks 00:08:32.777 nonblock - use non-blocking I/O 00:08:32.777 sync - use synchronized I/O for data and metadata 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:32.777 00:08:32.777 real 0m0.063s 00:08:32.777 user 0m0.033s 00:08:32.777 sys 0m0.029s 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:32.777 ************************************ 00:08:32.777 END TEST dd_invalid_arguments 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:32.777 ************************************ 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:32.777 ************************************ 00:08:32.777 START TEST dd_double_input 00:08:32.777 ************************************ 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:32.777 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:33.037 [2024-11-19 16:03:39.541207] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
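A minimal sketch of the input-selection rule being exercised by the rejection just above, using only flags listed in the usage text earlier; the paths, the bdev names, and $SPDK (standing in for the repo root) are illustrative placeholders, not values from this run:

    # exactly one input source is allowed: a file (--if) or a bdev (--ib), never both
    $SPDK/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --bs=4096
    # giving both --if and --ib is rejected: 'You may specify either --if or --ib, but not both.'
    $SPDK/build/bin/spdk_dd --if=/tmp/in.bin --ib=malloc0 --ob=malloc1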
00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.037 00:08:33.037 real 0m0.082s 00:08:33.037 user 0m0.053s 00:08:33.037 sys 0m0.028s 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.037 ************************************ 00:08:33.037 END TEST dd_double_input 00:08:33.037 ************************************ 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.037 ************************************ 00:08:33.037 START TEST dd_double_output 00:08:33.037 ************************************ 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:33.037 [2024-11-19 16:03:39.665019] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.037 00:08:33.037 real 0m0.068s 00:08:33.037 user 0m0.037s 00:08:33.037 sys 0m0.030s 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.037 ************************************ 00:08:33.037 END TEST dd_double_output 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:33.037 ************************************ 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.037 ************************************ 00:08:33.037 START TEST dd_no_input 00:08:33.037 ************************************ 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.037 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.038 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.038 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.038 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:33.297 [2024-11-19 16:03:39.786099] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:33.297 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:08:33.297 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.297 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.297 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.297 00:08:33.297 real 0m0.063s 00:08:33.298 user 0m0.035s 00:08:33.298 sys 0m0.028s 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:33.298 ************************************ 00:08:33.298 END TEST dd_no_input 00:08:33.298 ************************************ 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.298 ************************************ 00:08:33.298 START TEST dd_no_output 00:08:33.298 ************************************ 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:33.298 [2024-11-19 16:03:39.908457] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:33.298 16:03:39 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.298 00:08:33.298 real 0m0.067s 00:08:33.298 user 0m0.042s 00:08:33.298 sys 0m0.024s 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.298 ************************************ 00:08:33.298 END TEST dd_no_output 00:08:33.298 ************************************ 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.298 ************************************ 00:08:33.298 START TEST dd_wrong_blocksize 00:08:33.298 ************************************ 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.298 16:03:39 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:33.557 [2024-11-19 16:03:40.039175] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.558 00:08:33.558 real 0m0.081s 00:08:33.558 user 0m0.056s 00:08:33.558 sys 0m0.024s 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.558 ************************************ 00:08:33.558 END TEST dd_wrong_blocksize 00:08:33.558 ************************************ 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:33.558 ************************************ 00:08:33.558 START TEST dd_smaller_blocksize 00:08:33.558 ************************************ 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.558 
16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.558 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:33.558 [2024-11-19 16:03:40.175547] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:33.558 [2024-11-19 16:03:40.175649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74723 ] 00:08:33.816 [2024-11-19 16:03:40.328170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.816 [2024-11-19 16:03:40.353568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.816 [2024-11-19 16:03:40.388793] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.816 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:33.816 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:33.816 [2024-11-19 16:03:40.407701] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:33.816 [2024-11-19 16:03:40.407736] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:33.816 [2024-11-19 16:03:40.479115] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.075 00:08:34.075 real 0m0.424s 00:08:34.075 user 0m0.211s 00:08:34.075 sys 0m0.107s 00:08:34.075 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.075 ************************************ 00:08:34.075 END TEST dd_smaller_blocksize 00:08:34.076 ************************************ 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.076 ************************************ 00:08:34.076 START TEST dd_invalid_count 00:08:34.076 ************************************ 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:34.076 [2024-11-19 16:03:40.694011] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.076 00:08:34.076 real 0m0.127s 00:08:34.076 user 0m0.086s 00:08:34.076 sys 0m0.038s 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.076 ************************************ 00:08:34.076 END TEST dd_invalid_count 00:08:34.076 ************************************ 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.076 ************************************ 
00:08:34.076 START TEST dd_invalid_oflag 00:08:34.076 ************************************ 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.076 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:34.335 [2024-11-19 16:03:40.847587] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.335 00:08:34.335 real 0m0.086s 00:08:34.335 user 0m0.055s 00:08:34.335 sys 0m0.029s 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.335 ************************************ 00:08:34.335 END TEST dd_invalid_oflag 00:08:34.335 ************************************ 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:34.335 16:03:40 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.336 ************************************ 00:08:34.336 START TEST dd_invalid_iflag 00:08:34.336 
************************************ 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.336 16:03:40 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:34.336 [2024-11-19 16:03:40.990204] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:34.336 ************************************ 00:08:34.336 END TEST dd_invalid_iflag 00:08:34.336 ************************************ 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.336 00:08:34.336 real 0m0.083s 00:08:34.336 user 0m0.054s 00:08:34.336 sys 0m0.028s 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.336 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.595 ************************************ 00:08:34.595 START TEST dd_unknown_flag 00:08:34.595 ************************************ 00:08:34.595 
16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.595 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.596 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.596 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:34.596 [2024-11-19 16:03:41.117540] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
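The unknown_flag case below feeds --oflag=-1, a value that is not in the 'Available iflag and oflag values' list printed by the usage text earlier. A small sketch of the documented form, assuming placeholder paths and $SPDK as the repo root:

    # flags are passed by name and must come from the documented set (append, direct, dsync, noatime, ...)
    $SPDK/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --oflag=direct
    # '-1' is not a documented flag name and is rejected: 'Unknown file flag: -1'
    $SPDK/build/bin/spdk_dd --if=/tmp/in.bin --of=/tmp/out.bin --oflag=-1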
00:08:34.596 [2024-11-19 16:03:41.117641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74815 ] 00:08:34.596 [2024-11-19 16:03:41.250166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.596 [2024-11-19 16:03:41.269952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.596 [2024-11-19 16:03:41.298105] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.855 [2024-11-19 16:03:41.314088] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:34.855 [2024-11-19 16:03:41.314165] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.855 [2024-11-19 16:03:41.314215] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:34.855 [2024-11-19 16:03:41.314227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.855 [2024-11-19 16:03:41.314506] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:34.855 [2024-11-19 16:03:41.314534] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.855 [2024-11-19 16:03:41.314600] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:34.855 [2024-11-19 16:03:41.314611] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:34.855 [2024-11-19 16:03:41.380633] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.855 00:08:34.855 real 0m0.366s 00:08:34.855 user 0m0.174s 00:08:34.855 sys 0m0.099s 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.855 ************************************ 00:08:34.855 END TEST dd_unknown_flag 00:08:34.855 ************************************ 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:34.855 ************************************ 00:08:34.855 START TEST dd_invalid_json 00:08:34.855 ************************************ 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:34.855 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:34.855 [2024-11-19 16:03:41.555113] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
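This case passes --json /dev/fd/62 with an empty document, which spdk_dd rejects below ('JSON data cannot be empty'). A hedged sketch of a non-empty config in the same shape as the bdev configs printed later in this log; the file path, output path, and sizes are illustrative assumptions:

    # write a minimal bdev config (same structure as the seek/skip tests further down)
    cat > /tmp/dd_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "block_size": 512, "num_blocks": 512, "name": "malloc0" }, "method": "bdev_malloc_create" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    # then point spdk_dd at it ($SPDK and the output path are placeholders)
    $SPDK/build/bin/spdk_dd --ib=malloc0 --of=/tmp/out.bin --bs=512 --json /tmp/dd_bdev.json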
00:08:34.855 [2024-11-19 16:03:41.555688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:08:35.114 [2024-11-19 16:03:41.704838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.114 [2024-11-19 16:03:41.726134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.114 [2024-11-19 16:03:41.726233] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:35.114 [2024-11-19 16:03:41.726281] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:35.114 [2024-11-19 16:03:41.726290] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.114 [2024-11-19 16:03:41.726375] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.114 00:08:35.114 real 0m0.288s 00:08:35.114 user 0m0.131s 00:08:35.114 sys 0m0.054s 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.114 ************************************ 00:08:35.114 END TEST dd_invalid_json 00:08:35.114 ************************************ 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.114 16:03:41 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.374 ************************************ 00:08:35.374 START TEST dd_invalid_seek 00:08:35.374 ************************************ 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.374 
16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.374 16:03:41 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:35.374 [2024-11-19 16:03:41.897946] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
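The seek test drives --seek past the end of a 512-block output bdev (its JSON config is printed just below). A sketch of the boundary being checked, measured in --bs-sized I/O units; the in-range values and the config path are illustrative, and the config file is assumed to define malloc0 and malloc1 as shown below:

    # output bdev malloc1 has 512 blocks, so the seek offset plus the copied count must fit within it
    $SPDK/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=512 --seek=256 --count=256 --json /tmp/dd_two_bdevs.json
    # --seek=513 exceeds the 512 available blocks and fails: '--seek value too big (513) - only 512 blocks available in output'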
00:08:35.374 [2024-11-19 16:03:41.898049] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74873 ] 00:08:35.374 { 00:08:35.374 "subsystems": [ 00:08:35.374 { 00:08:35.374 "subsystem": "bdev", 00:08:35.374 "config": [ 00:08:35.374 { 00:08:35.374 "params": { 00:08:35.374 "block_size": 512, 00:08:35.374 "num_blocks": 512, 00:08:35.374 "name": "malloc0" 00:08:35.374 }, 00:08:35.374 "method": "bdev_malloc_create" 00:08:35.374 }, 00:08:35.374 { 00:08:35.374 "params": { 00:08:35.374 "block_size": 512, 00:08:35.374 "num_blocks": 512, 00:08:35.374 "name": "malloc1" 00:08:35.374 }, 00:08:35.374 "method": "bdev_malloc_create" 00:08:35.374 }, 00:08:35.374 { 00:08:35.374 "method": "bdev_wait_for_examine" 00:08:35.374 } 00:08:35.374 ] 00:08:35.374 } 00:08:35.374 ] 00:08:35.374 } 00:08:35.374 [2024-11-19 16:03:42.043587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.374 [2024-11-19 16:03:42.062592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.634 [2024-11-19 16:03:42.092587] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.634 [2024-11-19 16:03:42.133945] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:35.634 [2024-11-19 16:03:42.134016] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.634 [2024-11-19 16:03:42.190853] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:35.634 ************************************ 00:08:35.634 END TEST dd_invalid_seek 00:08:35.634 ************************************ 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:35.634 00:08:35.634 real 0m0.404s 00:08:35.634 user 0m0.258s 00:08:35.634 sys 0m0.108s 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:35.634 ************************************ 00:08:35.634 START TEST dd_invalid_skip 00:08:35.634 ************************************ 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.634 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:35.893 { 00:08:35.893 "subsystems": [ 00:08:35.893 { 00:08:35.893 "subsystem": "bdev", 00:08:35.893 "config": [ 00:08:35.893 { 00:08:35.893 "params": { 00:08:35.893 "block_size": 512, 00:08:35.893 "num_blocks": 512, 00:08:35.893 "name": "malloc0" 00:08:35.893 }, 00:08:35.893 "method": "bdev_malloc_create" 00:08:35.893 }, 00:08:35.893 { 00:08:35.893 "params": { 00:08:35.893 "block_size": 512, 00:08:35.893 "num_blocks": 512, 00:08:35.893 "name": "malloc1" 
00:08:35.893 }, 00:08:35.893 "method": "bdev_malloc_create" 00:08:35.893 }, 00:08:35.893 { 00:08:35.893 "method": "bdev_wait_for_examine" 00:08:35.893 } 00:08:35.893 ] 00:08:35.893 } 00:08:35.893 ] 00:08:35.893 } 00:08:35.893 [2024-11-19 16:03:42.357456] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:35.893 [2024-11-19 16:03:42.357571] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74901 ] 00:08:35.893 [2024-11-19 16:03:42.507119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.893 [2024-11-19 16:03:42.526573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.893 [2024-11-19 16:03:42.554626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.893 [2024-11-19 16:03:42.596015] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:35.893 [2024-11-19 16:03:42.596133] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.165 [2024-11-19 16:03:42.663540] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.165 00:08:36.165 real 0m0.422s 00:08:36.165 user 0m0.274s 00:08:36.165 sys 0m0.107s 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.165 ************************************ 00:08:36.165 END TEST dd_invalid_skip 00:08:36.165 ************************************ 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 ************************************ 00:08:36.165 START TEST dd_invalid_input_count 00:08:36.165 ************************************ 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:36.165 16:03:42 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.165 16:03:42 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:36.165 { 00:08:36.165 "subsystems": [ 00:08:36.165 { 00:08:36.165 "subsystem": "bdev", 00:08:36.165 "config": [ 00:08:36.165 { 00:08:36.165 "params": { 00:08:36.165 "block_size": 512, 00:08:36.165 "num_blocks": 512, 00:08:36.165 "name": "malloc0" 00:08:36.165 }, 00:08:36.165 "method": "bdev_malloc_create" 00:08:36.165 }, 00:08:36.165 { 00:08:36.165 "params": { 00:08:36.165 "block_size": 512, 00:08:36.165 "num_blocks": 512, 00:08:36.165 "name": "malloc1" 00:08:36.165 }, 00:08:36.165 "method": "bdev_malloc_create" 00:08:36.165 }, 00:08:36.165 { 00:08:36.165 "method": "bdev_wait_for_examine" 00:08:36.165 } 
00:08:36.165 ] 00:08:36.165 } 00:08:36.165 ] 00:08:36.165 } 00:08:36.165 [2024-11-19 16:03:42.835885] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:36.165 [2024-11-19 16:03:42.836011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74940 ] 00:08:36.424 [2024-11-19 16:03:42.982864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.424 [2024-11-19 16:03:43.004662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.424 [2024-11-19 16:03:43.034177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.424 [2024-11-19 16:03:43.075655] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:36.424 [2024-11-19 16:03:43.075764] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.683 [2024-11-19 16:03:43.141827] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:36.683 00:08:36.683 real 0m0.423s 00:08:36.683 user 0m0.259s 00:08:36.683 sys 0m0.116s 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.683 ************************************ 00:08:36.683 END TEST dd_invalid_input_count 00:08:36.683 ************************************ 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:36.683 ************************************ 00:08:36.683 START TEST dd_invalid_output_count 00:08:36.683 ************************************ 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A 
method_bdev_malloc_create_0 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:36.683 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:36.683 { 00:08:36.683 "subsystems": [ 00:08:36.683 { 00:08:36.683 "subsystem": "bdev", 00:08:36.683 "config": [ 00:08:36.683 { 00:08:36.683 "params": { 00:08:36.683 "block_size": 512, 00:08:36.683 "num_blocks": 512, 00:08:36.683 "name": "malloc0" 00:08:36.683 }, 00:08:36.683 "method": "bdev_malloc_create" 00:08:36.683 }, 00:08:36.683 { 00:08:36.683 "method": "bdev_wait_for_examine" 00:08:36.683 } 00:08:36.683 ] 00:08:36.683 } 00:08:36.683 ] 00:08:36.683 } 00:08:36.683 [2024-11-19 16:03:43.312879] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:36.683 [2024-11-19 16:03:43.312984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74968 ] 00:08:36.942 [2024-11-19 16:03:43.459262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.942 [2024-11-19 16:03:43.482024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.942 [2024-11-19 16:03:43.511561] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:36.942 [2024-11-19 16:03:43.545808] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:36.942 [2024-11-19 16:03:43.545900] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:36.943 [2024-11-19 16:03:43.607255] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.202 00:08:37.202 real 0m0.411s 00:08:37.202 user 0m0.266s 00:08:37.202 sys 0m0.099s 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:37.202 ************************************ 00:08:37.202 END TEST dd_invalid_output_count 00:08:37.202 ************************************ 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:37.202 ************************************ 00:08:37.202 START TEST dd_bs_not_multiple 00:08:37.202 ************************************ 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:37.202 16:03:43 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:37.202 16:03:43 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:37.202 [2024-11-19 16:03:43.775609] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:37.202 [2024-11-19 16:03:43.775729] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75005 ] 00:08:37.202 { 00:08:37.202 "subsystems": [ 00:08:37.202 { 00:08:37.202 "subsystem": "bdev", 00:08:37.202 "config": [ 00:08:37.202 { 00:08:37.202 "params": { 00:08:37.202 "block_size": 512, 00:08:37.202 "num_blocks": 512, 00:08:37.202 "name": "malloc0" 00:08:37.202 }, 00:08:37.203 "method": "bdev_malloc_create" 00:08:37.203 }, 00:08:37.203 { 00:08:37.203 "params": { 00:08:37.203 "block_size": 512, 00:08:37.203 "num_blocks": 512, 00:08:37.203 "name": "malloc1" 00:08:37.203 }, 00:08:37.203 "method": "bdev_malloc_create" 00:08:37.203 }, 00:08:37.203 { 00:08:37.203 "method": "bdev_wait_for_examine" 00:08:37.203 } 00:08:37.203 ] 00:08:37.203 } 00:08:37.203 ] 00:08:37.203 } 00:08:37.462 [2024-11-19 16:03:43.916108] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.462 [2024-11-19 16:03:43.935963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.462 [2024-11-19 16:03:43.964578] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.462 [2024-11-19 16:03:44.005505] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:37.462 [2024-11-19 16:03:44.005603] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:37.462 [2024-11-19 16:03:44.067887] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:37.462 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:08:37.462 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:37.462 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:08:37.462 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:08:37.462 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:08:37.463 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:37.463 00:08:37.463 real 0m0.400s 00:08:37.463 user 0m0.261s 00:08:37.463 sys 0m0.103s 00:08:37.463 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.463 16:03:44 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:37.463 ************************************ 00:08:37.463 END TEST dd_bs_not_multiple 00:08:37.463 ************************************ 00:08:37.463 00:08:37.463 real 0m5.005s 00:08:37.463 user 0m2.701s 00:08:37.463 sys 0m1.688s 00:08:37.463 16:03:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.463 16:03:44 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:37.463 ************************************ 00:08:37.463 END TEST spdk_dd_negative 00:08:37.463 ************************************ 00:08:37.722 ************************************ 00:08:37.722 END TEST spdk_dd 00:08:37.722 ************************************ 00:08:37.722 00:08:37.722 real 1m0.894s 00:08:37.722 user 0m38.202s 00:08:37.722 sys 0m25.586s 00:08:37.722 16:03:44 spdk_dd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:37.722 16:03:44 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 16:03:44 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:37.722 16:03:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:37.722 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 16:03:44 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:37.722 16:03:44 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:37.722 16:03:44 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.722 16:03:44 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.722 16:03:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.722 16:03:44 -- common/autotest_common.sh@10 -- # set +x 00:08:37.722 ************************************ 00:08:37.722 START TEST nvmf_tcp 00:08:37.722 ************************************ 00:08:37.722 16:03:44 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:37.722 * Looking for test storage... 00:08:37.722 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:37.722 16:03:44 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.722 16:03:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.722 16:03:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.992 16:03:44 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.992 16:03:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:37.992 16:03:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.993 --rc genhtml_branch_coverage=1 00:08:37.993 --rc genhtml_function_coverage=1 00:08:37.993 --rc genhtml_legend=1 00:08:37.993 --rc geninfo_all_blocks=1 00:08:37.993 --rc geninfo_unexecuted_blocks=1 00:08:37.993 00:08:37.993 ' 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.993 --rc genhtml_branch_coverage=1 00:08:37.993 --rc genhtml_function_coverage=1 00:08:37.993 --rc genhtml_legend=1 00:08:37.993 --rc geninfo_all_blocks=1 00:08:37.993 --rc geninfo_unexecuted_blocks=1 00:08:37.993 00:08:37.993 ' 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.993 --rc genhtml_branch_coverage=1 00:08:37.993 --rc genhtml_function_coverage=1 00:08:37.993 --rc genhtml_legend=1 00:08:37.993 --rc geninfo_all_blocks=1 00:08:37.993 --rc geninfo_unexecuted_blocks=1 00:08:37.993 00:08:37.993 ' 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:37.993 --rc genhtml_branch_coverage=1 00:08:37.993 --rc genhtml_function_coverage=1 00:08:37.993 --rc genhtml_legend=1 00:08:37.993 --rc geninfo_all_blocks=1 00:08:37.993 --rc geninfo_unexecuted_blocks=1 00:08:37.993 00:08:37.993 ' 00:08:37.993 16:03:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:37.993 16:03:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:37.993 16:03:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.993 16:03:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:37.993 ************************************ 00:08:37.993 START TEST nvmf_target_core 00:08:37.993 ************************************ 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:37.993 * Looking for test storage... 00:08:37.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:37.993 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.280 --rc genhtml_branch_coverage=1 00:08:38.280 --rc genhtml_function_coverage=1 00:08:38.280 --rc genhtml_legend=1 00:08:38.280 --rc geninfo_all_blocks=1 00:08:38.280 --rc geninfo_unexecuted_blocks=1 00:08:38.280 00:08:38.280 ' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.280 --rc genhtml_branch_coverage=1 00:08:38.280 --rc genhtml_function_coverage=1 00:08:38.280 --rc genhtml_legend=1 00:08:38.280 --rc geninfo_all_blocks=1 00:08:38.280 --rc geninfo_unexecuted_blocks=1 00:08:38.280 00:08:38.280 ' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.280 --rc genhtml_branch_coverage=1 00:08:38.280 --rc genhtml_function_coverage=1 00:08:38.280 --rc genhtml_legend=1 00:08:38.280 --rc geninfo_all_blocks=1 00:08:38.280 --rc geninfo_unexecuted_blocks=1 00:08:38.280 00:08:38.280 ' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.280 --rc genhtml_branch_coverage=1 00:08:38.280 --rc genhtml_function_coverage=1 00:08:38.280 --rc genhtml_legend=1 00:08:38.280 --rc geninfo_all_blocks=1 00:08:38.280 --rc geninfo_unexecuted_blocks=1 00:08:38.280 00:08:38.280 ' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:38.280 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.281 ************************************ 00:08:38.281 START TEST nvmf_host_management 00:08:38.281 ************************************ 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:38.281 * Looking for test storage... 
00:08:38.281 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:38.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.281 --rc genhtml_branch_coverage=1 00:08:38.281 --rc genhtml_function_coverage=1 00:08:38.281 --rc genhtml_legend=1 00:08:38.281 --rc geninfo_all_blocks=1 00:08:38.281 --rc geninfo_unexecuted_blocks=1 00:08:38.281 00:08:38.281 ' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:38.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.281 --rc genhtml_branch_coverage=1 00:08:38.281 --rc genhtml_function_coverage=1 00:08:38.281 --rc genhtml_legend=1 00:08:38.281 --rc geninfo_all_blocks=1 00:08:38.281 --rc geninfo_unexecuted_blocks=1 00:08:38.281 00:08:38.281 ' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:38.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.281 --rc genhtml_branch_coverage=1 00:08:38.281 --rc genhtml_function_coverage=1 00:08:38.281 --rc genhtml_legend=1 00:08:38.281 --rc geninfo_all_blocks=1 00:08:38.281 --rc geninfo_unexecuted_blocks=1 00:08:38.281 00:08:38.281 ' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:38.281 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.281 --rc genhtml_branch_coverage=1 00:08:38.281 --rc genhtml_function_coverage=1 00:08:38.281 --rc genhtml_legend=1 00:08:38.281 --rc geninfo_all_blocks=1 00:08:38.281 --rc geninfo_unexecuted_blocks=1 00:08:38.281 00:08:38.281 ' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:38.281 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:38.282 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.282 16:03:44 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:38.282 Cannot find device "nvmf_init_br" 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:38.282 Cannot find device "nvmf_init_br2" 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:38.282 Cannot find device "nvmf_tgt_br" 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:38.282 Cannot find device "nvmf_tgt_br2" 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:38.282 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:38.541 Cannot find device "nvmf_init_br" 00:08:38.541 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:38.542 16:03:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:38.542 Cannot find device "nvmf_init_br2" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:38.542 Cannot find device "nvmf_tgt_br" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:38.542 Cannot find device "nvmf_tgt_br2" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:38.542 Cannot find device "nvmf_br" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:38.542 Cannot find device "nvmf_init_if" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:38.542 Cannot find device "nvmf_init_if2" 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:38.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:38.542 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:38.542 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:38.801 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:38.801 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.099 ms 00:08:38.801 00:08:38.801 --- 10.0.0.3 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:38.801 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:38.801 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.100 ms 00:08:38.801 00:08:38.801 --- 10.0.0.4 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:38.801 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:38.801 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:08:38.801 00:08:38.801 --- 10.0.0.1 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:38.801 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:38.801 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:08:38.801 00:08:38.801 --- 10.0.0.2 ping statistics --- 00:08:38.801 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:38.801 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=75340 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 75340 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 75340 ']' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.801 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.060 [2024-11-19 16:03:45.536035] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:39.060 [2024-11-19 16:03:45.536137] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.060 [2024-11-19 16:03:45.692881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:39.060 [2024-11-19 16:03:45.721407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.060 [2024-11-19 16:03:45.721474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.060 [2024-11-19 16:03:45.721488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.060 [2024-11-19 16:03:45.721498] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.060 [2024-11-19 16:03:45.721507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:39.060 [2024-11-19 16:03:45.722740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.060 [2024-11-19 16:03:45.722907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.060 [2024-11-19 16:03:45.723071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:39.060 [2024-11-19 16:03:45.723078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.060 [2024-11-19 16:03:45.758861] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 [2024-11-19 16:03:45.855336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
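The nvmf_veth_init trace above (common.sh@145-225) builds the test network used by the target and initiators: four veth pairs, a target network namespace, a bridge joining the *_br ends, iptables ACCEPT rules for port 4420, and ping checks. The following is a condensed, standalone sketch of that topology; names and addresses are taken from the trace, but it is an illustration of the layout rather than the exact common.sh code (run as root).

# Sketch of the nvmf_veth_init topology seen in the trace above (illustrative, not the exact script)
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
# Two initiator-facing veth pairs and two target-facing veth pairs
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# Target ends live inside the namespace
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
# Initiator addresses .1/.2, target addresses .3/.4
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# Bring links up and bridge all *_br ends together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Admit NVMe/TCP traffic (port 4420) and bridge-forwarded frames, then verify reachability
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3 && ip netns exec "$NS" ping -c 1 10.0.0.1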
00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 Malloc0 00:08:39.319 [2024-11-19 16:03:45.925288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=75392 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 75392 /var/tmp/bdevperf.sock 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 75392 ']' 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.319 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
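host_management.sh@22-23 clears rpcs.txt and feeds a prepared RPC batch to the target; the notices above show the result (a Malloc0 bdev and a TCP listener on 10.0.0.3:4420). The exact batch is not printed in the log, so the following is a hypothetical reconstruction using standard SPDK rpc.py methods, the MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 values from the trace, and the NQNs that appear later in the run; the serial number is a placeholder.

# Hypothetical reconstruction of the RPC batch applied at host_management.sh@23
# (the actual rpcs.txt is not shown in the log; "SPDK0" is a placeholder serial)
rpc_cmd <<- EOF
	bdev_malloc_create -b Malloc0 64 512
	nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
	nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
	nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4
	nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
EOF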
00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:39.320 { 00:08:39.320 "params": { 00:08:39.320 "name": "Nvme$subsystem", 00:08:39.320 "trtype": "$TEST_TRANSPORT", 00:08:39.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:39.320 "adrfam": "ipv4", 00:08:39.320 "trsvcid": "$NVMF_PORT", 00:08:39.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:39.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:39.320 "hdgst": ${hdgst:-false}, 00:08:39.320 "ddgst": ${ddgst:-false} 00:08:39.320 }, 00:08:39.320 "method": "bdev_nvme_attach_controller" 00:08:39.320 } 00:08:39.320 EOF 00:08:39.320 )") 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:39.320 16:03:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:39.320 "params": { 00:08:39.320 "name": "Nvme0", 00:08:39.320 "trtype": "tcp", 00:08:39.320 "traddr": "10.0.0.3", 00:08:39.320 "adrfam": "ipv4", 00:08:39.320 "trsvcid": "4420", 00:08:39.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:39.320 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:39.320 "hdgst": false, 00:08:39.320 "ddgst": false 00:08:39.320 }, 00:08:39.320 "method": "bdev_nvme_attach_controller" 00:08:39.320 }' 00:08:39.578 [2024-11-19 16:03:46.037850] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:08:39.579 [2024-11-19 16:03:46.037961] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75392 ] 00:08:39.579 [2024-11-19 16:03:46.190192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.579 [2024-11-19 16:03:46.215708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.579 [2024-11-19 16:03:46.258170] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.837 Running I/O for 10 seconds... 
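The JSON fragment printed above is what gen_nvmf_target_json 0 produces; bdevperf reads it through --json /dev/fd/63 and attaches an NVMe-oF controller over TCP before starting the verify workload. For illustration only, the same attachment could be expressed as a single RPC against bdevperf's socket, using the addresses and NQNs from the generated config:

# Equivalent of the generated config above, issued manually (illustrative only)
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.3 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0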
00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.837 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:39.838 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.096 16:03:46 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.096 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.356 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:08:40.356 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:40.356 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.357 16:03:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:40.357 [2024-11-19 16:03:46.833287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.833987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.833998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.834007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.834019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.834030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.834041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.834050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.834062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.834071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.834083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.357 [2024-11-19 16:03:46.834092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.357 [2024-11-19 16:03:46.834103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:08:40.358 [2024-11-19 16:03:46.834337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 
[2024-11-19 16:03:46.834559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:40.358 [2024-11-19 16:03:46.834756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2018b70 is same with the state(6) to be set 00:08:40.358 [2024-11-19 16:03:46.834950] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.358 [2024-11-19 16:03:46.834968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.358 [2024-11-19 16:03:46.834989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.834998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.358 [2024-11-19 16:03:46.835008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.835018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:08:40.358 [2024-11-19 16:03:46.835027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:40.358 [2024-11-19 16:03:46.835046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da7380 is same with the state(6) to be set 00:08:40.358 [2024-11-19 16:03:46.836195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:08:40.358 task offset: 81920 on job bdev=Nvme0n1 fails 00:08:40.358 00:08:40.358 Latency(us) 00:08:40.358 [2024-11-19T16:03:47.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.358 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:40.358 Job: Nvme0n1 ended in about 0.47 seconds with error 00:08:40.358 Verification LBA range: start 0x0 length 0x400 00:08:40.358 Nvme0n1 : 0.47 1351.74 84.48 135.17 0.00 41429.50 2323.55 44326.17 00:08:40.358 [2024-11-19T16:03:47.073Z] =================================================================================================================== 00:08:40.358 [2024-11-19T16:03:47.073Z] Total : 1351.74 84.48 135.17 0.00 41429.50 2323.55 44326.17 00:08:40.358 [2024-11-19 16:03:46.838653] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.358 [2024-11-19 16:03:46.838700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da7380 (9): Bad file descriptor 00:08:40.359 [2024-11-19 16:03:46.847948] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
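For reference, the read_io_count polling traced at host_management.sh@52-62 (before the host-removal step above) reduces to a small bounded retry loop: query bdevperf's iostat over its RPC socket until Nvme0n1 has completed at least 100 reads or ten attempts are used up. A minimal sketch, assuming rpc_cmd wraps scripts/rpc.py as in the harness:

# Sketch of the waitforio loop seen in the trace; socket path and thresholds mirror the log
ret=1
for ((i = 10; i != 0; i--)); do
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
        | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0          # enough I/O observed; the test proceeds to remove and re-add the host
        break
    fi
    sleep 0.25
done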
00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 75392 00:08:41.300 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (75392) - No such process 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:41.300 { 00:08:41.300 "params": { 00:08:41.300 "name": "Nvme$subsystem", 00:08:41.300 "trtype": "$TEST_TRANSPORT", 00:08:41.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:41.300 "adrfam": "ipv4", 00:08:41.300 "trsvcid": "$NVMF_PORT", 00:08:41.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:41.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:41.300 "hdgst": ${hdgst:-false}, 00:08:41.300 "ddgst": ${ddgst:-false} 00:08:41.300 }, 00:08:41.300 "method": "bdev_nvme_attach_controller" 00:08:41.300 } 00:08:41.300 EOF 00:08:41.300 )") 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:08:41.300 16:03:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.300 "params": { 00:08:41.300 "name": "Nvme0", 00:08:41.300 "trtype": "tcp", 00:08:41.300 "traddr": "10.0.0.3", 00:08:41.300 "adrfam": "ipv4", 00:08:41.300 "trsvcid": "4420", 00:08:41.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:41.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:41.300 "hdgst": false, 00:08:41.300 "ddgst": false 00:08:41.300 }, 00:08:41.300 "method": "bdev_nvme_attach_controller" 00:08:41.300 }' 00:08:41.300 [2024-11-19 16:03:47.894912] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:41.300 [2024-11-19 16:03:47.895717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75432 ] 00:08:41.559 [2024-11-19 16:03:48.045842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.559 [2024-11-19 16:03:48.066214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.559 [2024-11-19 16:03:48.103710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.559 Running I/O for 1 seconds... 00:08:42.763 1536.00 IOPS, 96.00 MiB/s 00:08:42.763 Latency(us) 00:08:42.763 [2024-11-19T16:03:49.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.763 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:42.763 Verification LBA range: start 0x0 length 0x400 00:08:42.763 Nvme0n1 : 1.02 1562.54 97.66 0.00 0.00 40187.15 4021.53 36223.53 00:08:42.763 [2024-11-19T16:03:49.478Z] =================================================================================================================== 00:08:42.763 [2024-11-19T16:03:49.478Z] Total : 1562.54 97.66 0.00 0.00 40187.15 4021.53 36223.53 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.763 rmmod nvme_tcp 00:08:42.763 rmmod nvme_fabrics 00:08:42.763 rmmod nvme_keyring 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 75340 ']' 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 75340 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 75340 ']' 00:08:42.763 16:03:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 75340 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.763 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75340 00:08:43.023 killing process with pid 75340 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75340' 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 75340 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 75340 00:08:43.023 [2024-11-19 16:03:49.609753] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:43.023 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:43.024 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:43.282 16:03:49 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:43.282 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:43.282 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:43.283 00:08:43.283 real 0m5.132s 00:08:43.283 user 0m17.891s 00:08:43.283 sys 0m1.410s 00:08:43.283 ************************************ 00:08:43.283 END TEST nvmf_host_management 00:08:43.283 ************************************ 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.283 ************************************ 00:08:43.283 START TEST nvmf_lvol 00:08:43.283 ************************************ 00:08:43.283 16:03:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:43.542 * Looking for test storage... 
00:08:43.542 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:43.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.542 --rc genhtml_branch_coverage=1 00:08:43.542 --rc genhtml_function_coverage=1 00:08:43.542 --rc genhtml_legend=1 00:08:43.542 --rc geninfo_all_blocks=1 00:08:43.542 --rc geninfo_unexecuted_blocks=1 00:08:43.542 00:08:43.542 ' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:43.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.542 --rc genhtml_branch_coverage=1 00:08:43.542 --rc genhtml_function_coverage=1 00:08:43.542 --rc genhtml_legend=1 00:08:43.542 --rc geninfo_all_blocks=1 00:08:43.542 --rc geninfo_unexecuted_blocks=1 00:08:43.542 00:08:43.542 ' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:43.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.542 --rc genhtml_branch_coverage=1 00:08:43.542 --rc genhtml_function_coverage=1 00:08:43.542 --rc genhtml_legend=1 00:08:43.542 --rc geninfo_all_blocks=1 00:08:43.542 --rc geninfo_unexecuted_blocks=1 00:08:43.542 00:08:43.542 ' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:43.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.542 --rc genhtml_branch_coverage=1 00:08:43.542 --rc genhtml_function_coverage=1 00:08:43.542 --rc genhtml_legend=1 00:08:43.542 --rc geninfo_all_blocks=1 00:08:43.542 --rc geninfo_unexecuted_blocks=1 00:08:43.542 00:08:43.542 ' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.542 16:03:50 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.542 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:43.543 
16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:43.543 Cannot find device "nvmf_init_br" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:43.543 Cannot find device "nvmf_init_br2" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:43.543 Cannot find device "nvmf_tgt_br" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:43.543 Cannot find device "nvmf_tgt_br2" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:43.543 Cannot find device "nvmf_init_br" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:43.543 Cannot find device "nvmf_init_br2" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:43.543 Cannot find device "nvmf_tgt_br" 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:43.543 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:43.802 Cannot find device "nvmf_tgt_br2" 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:43.802 Cannot find device "nvmf_br" 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:43.802 Cannot find device "nvmf_init_if" 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:43.802 Cannot find device "nvmf_init_if2" 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:43.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:43.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:43.802 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:43.803 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:43.803 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:08:43.803 00:08:43.803 --- 10.0.0.3 ping statistics --- 00:08:43.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.803 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:43.803 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:43.803 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:08:43.803 00:08:43.803 --- 10.0.0.4 ping statistics --- 00:08:43.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:43.803 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:08:43.803 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:08:44.062 00:08:44.062 --- 10.0.0.1 ping statistics --- 00:08:44.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.062 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:08:44.062 00:08:44.062 --- 10.0.0.2 ping statistics --- 00:08:44.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.062 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=75701 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 75701 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 75701 ']' 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:44.062 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.062 [2024-11-19 16:03:50.627293] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
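The ping exchanges above complete nvmf_veth_init: the initiator ends of the veth pairs stay on the host side (10.0.0.1 and 10.0.0.2), their peers join the nvmf_br bridge together with the bridge ends of the target veths, and the target ends (10.0.0.3 and 10.0.0.4) sit inside the nvmf_tgt_ns_spdk namespace where nvmf_tgt is then started. A condensed sketch of the same topology for a single initiator/target pair, using the interface names and addresses from this run (the iptables ACCEPT rules and the second pair are omitted):
  # condensed sketch of what nvmf_veth_init builds (one pair only)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # host reaches the target address through the bridge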
00:08:44.062 [2024-11-19 16:03:50.627922] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.321 [2024-11-19 16:03:50.783878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:44.321 [2024-11-19 16:03:50.809371] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.322 [2024-11-19 16:03:50.809441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.322 [2024-11-19 16:03:50.809455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.322 [2024-11-19 16:03:50.809466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.322 [2024-11-19 16:03:50.809474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.322 [2024-11-19 16:03:50.810420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.322 [2024-11-19 16:03:50.810573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.322 [2024-11-19 16:03:50.810582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.322 [2024-11-19 16:03:50.846500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.322 16:03:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:44.580 [2024-11-19 16:03:51.230511] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.580 16:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:44.839 16:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:44.839 16:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:45.097 16:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:45.097 16:03:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:45.356 16:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:45.922 16:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=0c4279ab-1042-431d-98f5-aa55505c1c74 00:08:45.922 16:03:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c4279ab-1042-431d-98f5-aa55505c1c74 lvol 20 00:08:45.922 16:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=02d962e0-3565-4c0e-8d94-b547ba005bad 00:08:45.922 16:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:46.489 16:03:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02d962e0-3565-4c0e-8d94-b547ba005bad 00:08:46.489 16:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:46.747 [2024-11-19 16:03:53.421978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:46.747 16:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:47.006 16:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=75775 00:08:47.006 16:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:47.006 16:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:48.384 16:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 02d962e0-3565-4c0e-8d94-b547ba005bad MY_SNAPSHOT 00:08:48.384 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d6b7c010-ef18-4cdb-9391-fc2021457b17 00:08:48.384 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 02d962e0-3565-4c0e-8d94-b547ba005bad 30 00:08:48.949 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone d6b7c010-ef18-4cdb-9391-fc2021457b17 MY_CLONE 00:08:48.949 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=656b7ff7-2937-458c-88f8-1627b675353b 00:08:48.949 16:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 656b7ff7-2937-458c-88f8-1627b675353b 00:08:49.516 16:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 75775 00:08:57.650 Initializing NVMe Controllers 00:08:57.650 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:08:57.650 Controller IO queue size 128, less than required. 00:08:57.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:57.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:57.651 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:57.651 Initialization complete. Launching workers. 
00:08:57.651 ======================================================== 00:08:57.651 Latency(us) 00:08:57.651 Device Information : IOPS MiB/s Average min max 00:08:57.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10682.40 41.73 11985.01 1895.82 72288.71 00:08:57.651 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10721.60 41.88 11937.58 2480.80 73286.58 00:08:57.651 ======================================================== 00:08:57.651 Total : 21404.00 83.61 11961.26 1895.82 73286.58 00:08:57.651 00:08:57.651 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:57.651 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 02d962e0-3565-4c0e-8d94-b547ba005bad 00:08:57.909 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0c4279ab-1042-431d-98f5-aa55505c1c74 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:58.168 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:58.168 rmmod nvme_tcp 00:08:58.168 rmmod nvme_fabrics 00:08:58.168 rmmod nvme_keyring 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 75701 ']' 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 75701 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 75701 ']' 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 75701 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75701 00:08:58.427 killing process with pid 75701 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 75701' 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 75701 00:08:58.427 16:04:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 75701 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:58.427 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:08:58.686 00:08:58.686 real 0m15.375s 00:08:58.686 user 1m3.578s 00:08:58.686 sys 0m4.333s 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:58.686 ************************************ 00:08:58.686 END TEST nvmf_lvol 00:08:58.686 ************************************ 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.686 ************************************ 00:08:58.686 START TEST nvmf_lvs_grow 00:08:58.686 ************************************ 00:08:58.686 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:58.946 * Looking for test storage... 00:08:58.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.946 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.947 --rc genhtml_branch_coverage=1 00:08:58.947 --rc genhtml_function_coverage=1 00:08:58.947 --rc genhtml_legend=1 00:08:58.947 --rc geninfo_all_blocks=1 00:08:58.947 --rc geninfo_unexecuted_blocks=1 00:08:58.947 00:08:58.947 ' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.947 --rc genhtml_branch_coverage=1 00:08:58.947 --rc genhtml_function_coverage=1 00:08:58.947 --rc genhtml_legend=1 00:08:58.947 --rc geninfo_all_blocks=1 00:08:58.947 --rc geninfo_unexecuted_blocks=1 00:08:58.947 00:08:58.947 ' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.947 --rc genhtml_branch_coverage=1 00:08:58.947 --rc genhtml_function_coverage=1 00:08:58.947 --rc genhtml_legend=1 00:08:58.947 --rc geninfo_all_blocks=1 00:08:58.947 --rc geninfo_unexecuted_blocks=1 00:08:58.947 00:08:58.947 ' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:58.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.947 --rc genhtml_branch_coverage=1 00:08:58.947 --rc genhtml_function_coverage=1 00:08:58.947 --rc genhtml_legend=1 00:08:58.947 --rc geninfo_all_blocks=1 00:08:58.947 --rc geninfo_unexecuted_blocks=1 00:08:58.947 00:08:58.947 ' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:58.947 16:04:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:58.947 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
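The two RPC endpoints set just above stay separate for the rest of the run: rpc_py with no -s option talks to the nvmf target over its default UNIX socket (/var/tmp/spdk.sock), while calls meant for the initiator-side bdevperf process pass -s $bdevperf_rpc_sock. A minimal sketch of that split, using only invocations that appear later in this trace:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock
    # target-side RPC, default socket /var/tmp/spdk.sock
    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    # initiator-side RPC, routed to the bdevperf instance instead
    "$rpc_py" -s "$bdevperf_rpc_sock" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0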
00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:58.947 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:58.948 Cannot find device "nvmf_init_br" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:58.948 Cannot find device "nvmf_init_br2" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:58.948 Cannot find device "nvmf_tgt_br" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:58.948 Cannot find device "nvmf_tgt_br2" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:58.948 Cannot find device "nvmf_init_br" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:58.948 Cannot find device "nvmf_init_br2" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:58.948 Cannot find device "nvmf_tgt_br" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:58.948 Cannot find device "nvmf_tgt_br2" 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:08:58.948 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:59.207 Cannot find device "nvmf_br" 00:08:59.207 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:08:59.207 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:59.207 Cannot find device "nvmf_init_if" 00:08:59.207 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:59.208 Cannot find device "nvmf_init_if2" 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:59.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:59.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
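Condensed, the virtual topology that nvmf_veth_init builds in the commands above is four veth pairs bridged together, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. A sketch reconstructed from those same commands (the for-loop is only shorthand, not an extra step):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per endpoint; the *_br peer stays in the root namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2/24
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,   10.0.0.3/24 (inside the netns)
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,   10.0.0.4/24 (inside the netns)
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # a single bridge ties the four root-namespace peers together so the pings below succeed
    ip link add nvmf_br type bridge
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done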
00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:59.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:59.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:08:59.208 00:08:59.208 --- 10.0.0.3 ping statistics --- 00:08:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.208 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:59.208 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:59.208 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:08:59.208 00:08:59.208 --- 10.0.0.4 ping statistics --- 00:08:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.208 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:59.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:59.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:59.208 00:08:59.208 --- 10.0.0.1 ping statistics --- 00:08:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.208 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:59.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:59.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:59.208 00:08:59.208 --- 10.0.0.2 ping statistics --- 00:08:59.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.208 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:59.208 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=76153 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 76153 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 76153 ']' 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.467 16:04:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.467 [2024-11-19 16:04:05.994696] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:08:59.467 [2024-11-19 16:04:05.994815] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.467 [2024-11-19 16:04:06.146219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.467 [2024-11-19 16:04:06.163950] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.467 [2024-11-19 16:04:06.164183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.467 [2024-11-19 16:04:06.164201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.467 [2024-11-19 16:04:06.164209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.467 [2024-11-19 16:04:06.164215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.467 [2024-11-19 16:04:06.164556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.726 [2024-11-19 16:04:06.192822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.726 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:59.986 [2024-11-19 16:04:06.579941] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:59.986 ************************************ 00:08:59.986 START TEST lvs_grow_clean 00:08:59.986 ************************************ 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:59.986 16:04:06 
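Stripped of the harness bookkeeping, nvmfappstart above amounts to launching the SPDK target inside that namespace and issuing one RPC to create the TCP transport; the binary path, core mask, and transport options are the ones recorded in this run, while the backgrounding and pid capture are only sketched:

    # start the nvmf target on core 0 inside the test namespace (logged here as pid 76153)
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # once the app listens on /var/tmp/spdk.sock, create the TCP transport
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192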
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:08:59.986 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:00.554 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:00.555 16:04:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:00.555 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:00.555 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:00.555 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:00.813 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:00.813 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:00.813 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 lvol 150 00:09:01.072 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=78362cea-f84b-4a99-aca6-c1c85a770f74 00:09:01.072 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:01.072 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:01.331 [2024-11-19 16:04:07.951155] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:01.331 [2024-11-19 16:04:07.951234] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:01.331 true 00:09:01.332 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:01.332 16:04:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:01.590 16:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:01.590 16:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:01.849 16:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 78362cea-f84b-4a99-aca6-c1c85a770f74 00:09:02.108 16:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:02.367 [2024-11-19 16:04:08.971750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:02.367 16:04:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76229 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76229 /var/tmp/bdevperf.sock 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 76229 ']' 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:02.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.628 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:02.887 [2024-11-19 16:04:09.347977] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:09:02.887 [2024-11-19 16:04:09.348079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76229 ] 00:09:02.887 [2024-11-19 16:04:09.499580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.887 [2024-11-19 16:04:09.524072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:02.887 [2024-11-19 16:04:09.557365] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:02.887 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.145 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:03.145 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:03.404 Nvme0n1 00:09:03.404 16:04:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:03.663 [ 00:09:03.663 { 00:09:03.663 "name": "Nvme0n1", 00:09:03.663 "aliases": [ 00:09:03.663 "78362cea-f84b-4a99-aca6-c1c85a770f74" 00:09:03.663 ], 00:09:03.663 "product_name": "NVMe disk", 00:09:03.663 "block_size": 4096, 00:09:03.663 "num_blocks": 38912, 00:09:03.663 "uuid": "78362cea-f84b-4a99-aca6-c1c85a770f74", 00:09:03.663 "numa_id": -1, 00:09:03.663 "assigned_rate_limits": { 00:09:03.663 "rw_ios_per_sec": 0, 00:09:03.663 "rw_mbytes_per_sec": 0, 00:09:03.663 "r_mbytes_per_sec": 0, 00:09:03.663 "w_mbytes_per_sec": 0 00:09:03.663 }, 00:09:03.663 "claimed": false, 00:09:03.663 "zoned": false, 00:09:03.663 "supported_io_types": { 00:09:03.663 "read": true, 00:09:03.663 "write": true, 00:09:03.663 "unmap": true, 00:09:03.663 "flush": true, 00:09:03.663 "reset": true, 00:09:03.663 "nvme_admin": true, 00:09:03.663 "nvme_io": true, 00:09:03.663 "nvme_io_md": false, 00:09:03.663 "write_zeroes": true, 00:09:03.663 "zcopy": false, 00:09:03.663 "get_zone_info": false, 00:09:03.663 "zone_management": false, 00:09:03.663 "zone_append": false, 00:09:03.663 "compare": true, 00:09:03.663 "compare_and_write": true, 00:09:03.663 "abort": true, 00:09:03.663 "seek_hole": false, 00:09:03.663 "seek_data": false, 00:09:03.663 "copy": true, 00:09:03.663 "nvme_iov_md": false 00:09:03.663 }, 00:09:03.663 "memory_domains": [ 00:09:03.663 { 00:09:03.663 "dma_device_id": "system", 00:09:03.663 "dma_device_type": 1 00:09:03.663 } 00:09:03.663 ], 00:09:03.663 "driver_specific": { 00:09:03.663 "nvme": [ 00:09:03.663 { 00:09:03.663 "trid": { 00:09:03.663 "trtype": "TCP", 00:09:03.663 "adrfam": "IPv4", 00:09:03.663 "traddr": "10.0.0.3", 00:09:03.663 "trsvcid": "4420", 00:09:03.663 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:03.663 }, 00:09:03.663 "ctrlr_data": { 00:09:03.663 "cntlid": 1, 00:09:03.663 "vendor_id": "0x8086", 00:09:03.663 "model_number": "SPDK bdev Controller", 00:09:03.663 "serial_number": "SPDK0", 00:09:03.663 "firmware_revision": "25.01", 00:09:03.663 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:03.663 "oacs": { 00:09:03.663 "security": 0, 00:09:03.663 "format": 0, 00:09:03.663 "firmware": 0, 
00:09:03.663 "ns_manage": 0 00:09:03.663 }, 00:09:03.663 "multi_ctrlr": true, 00:09:03.663 "ana_reporting": false 00:09:03.663 }, 00:09:03.663 "vs": { 00:09:03.663 "nvme_version": "1.3" 00:09:03.663 }, 00:09:03.663 "ns_data": { 00:09:03.663 "id": 1, 00:09:03.664 "can_share": true 00:09:03.664 } 00:09:03.664 } 00:09:03.664 ], 00:09:03.664 "mp_policy": "active_passive" 00:09:03.664 } 00:09:03.664 } 00:09:03.664 ] 00:09:03.664 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76245 00:09:03.664 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:03.664 16:04:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:03.664 Running I/O for 10 seconds... 00:09:05.040 Latency(us) 00:09:05.040 [2024-11-19T16:04:11.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.040 Nvme0n1 : 1.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:05.040 [2024-11-19T16:04:11.755Z] =================================================================================================================== 00:09:05.040 [2024-11-19T16:04:11.755Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:05.040 00:09:05.607 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:05.866 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.866 Nvme0n1 : 2.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:05.866 [2024-11-19T16:04:12.581Z] =================================================================================================================== 00:09:05.866 [2024-11-19T16:04:12.581Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:05.866 00:09:05.866 true 00:09:05.866 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:05.866 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:06.435 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:06.435 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:06.435 16:04:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 76245 00:09:06.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.694 Nvme0n1 : 3.00 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:06.694 [2024-11-19T16:04:13.409Z] =================================================================================================================== 00:09:06.694 [2024-11-19T16:04:13.409Z] Total : 6646.33 25.96 0.00 0.00 0.00 0.00 0.00 00:09:06.694 00:09:08.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.072 Nvme0n1 : 4.00 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:09:08.072 [2024-11-19T16:04:14.787Z] 
=================================================================================================================== 00:09:08.072 [2024-11-19T16:04:14.787Z] Total : 6635.75 25.92 0.00 0.00 0.00 0.00 0.00 00:09:08.072 00:09:08.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.641 Nvme0n1 : 5.00 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:08.641 [2024-11-19T16:04:15.356Z] =================================================================================================================== 00:09:08.641 [2024-11-19T16:04:15.356Z] Total : 6629.40 25.90 0.00 0.00 0.00 0.00 0.00 00:09:08.641 00:09:10.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.021 Nvme0n1 : 6.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:10.021 [2024-11-19T16:04:16.736Z] =================================================================================================================== 00:09:10.021 [2024-11-19T16:04:16.736Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:10.021 00:09:10.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.959 Nvme0n1 : 7.00 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:10.959 [2024-11-19T16:04:17.674Z] =================================================================================================================== 00:09:10.959 [2024-11-19T16:04:17.674Z] Total : 6604.00 25.80 0.00 0.00 0.00 0.00 0.00 00:09:10.959 00:09:11.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.898 Nvme0n1 : 8.00 6514.12 25.45 0.00 0.00 0.00 0.00 0.00 00:09:11.898 [2024-11-19T16:04:18.613Z] =================================================================================================================== 00:09:11.898 [2024-11-19T16:04:18.613Z] Total : 6514.12 25.45 0.00 0.00 0.00 0.00 0.00 00:09:11.898 00:09:12.863 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.863 Nvme0n1 : 9.00 6481.78 25.32 0.00 0.00 0.00 0.00 0.00 00:09:12.863 [2024-11-19T16:04:19.578Z] =================================================================================================================== 00:09:12.863 [2024-11-19T16:04:19.578Z] Total : 6481.78 25.32 0.00 0.00 0.00 0.00 0.00 00:09:12.863 00:09:13.802 00:09:13.802 Latency(us) 00:09:13.802 [2024-11-19T16:04:20.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.802 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.802 Nvme0n1 : 10.00 6467.96 25.27 0.00 0.00 19785.29 7804.74 100567.97 00:09:13.802 [2024-11-19T16:04:20.517Z] =================================================================================================================== 00:09:13.802 [2024-11-19T16:04:20.517Z] Total : 6467.96 25.27 0.00 0.00 19785.29 7804.74 100567.97 00:09:13.802 { 00:09:13.802 "results": [ 00:09:13.802 { 00:09:13.802 "job": "Nvme0n1", 00:09:13.802 "core_mask": "0x2", 00:09:13.802 "workload": "randwrite", 00:09:13.802 "status": "finished", 00:09:13.802 "queue_depth": 128, 00:09:13.802 "io_size": 4096, 00:09:13.802 "runtime": 10.001146, 00:09:13.802 "iops": 6467.958771924737, 00:09:13.802 "mibps": 25.265463952831006, 00:09:13.802 "io_failed": 0, 00:09:13.802 "io_timeout": 0, 00:09:13.802 "avg_latency_us": 19785.294149477835, 00:09:13.802 "min_latency_us": 7804.741818181818, 00:09:13.802 "max_latency_us": 100567.9709090909 00:09:13.802 } 00:09:13.802 ], 00:09:13.802 "core_count": 1 00:09:13.802 } 00:09:13.802 16:04:20 
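The closing numbers are internally consistent: with 4096-byte I/Os, MiB/s is just IOPS times the I/O size divided by 2^20, so the 6467.96 IOPS reported for the 10-second run works out to the 25.27 MiB/s shown in the same summary. A one-line check:

    # 6467.958771924737 IOPS * 4096 B / 1048576 B per MiB  ->  ~25.27 MiB/s ("mibps" above)
    awk 'BEGIN { printf "%.2f\n", 6467.958771924737 * 4096 / 1048576 }'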
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76229 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 76229 ']' 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 76229 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76229 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.802 killing process with pid 76229 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76229' 00:09:13.802 Received shutdown signal, test time was about 10.000000 seconds 00:09:13.802 00:09:13.802 Latency(us) 00:09:13.802 [2024-11-19T16:04:20.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.802 [2024-11-19T16:04:20.517Z] =================================================================================================================== 00:09:13.802 [2024-11-19T16:04:20.517Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 76229 00:09:13.802 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 76229 00:09:14.061 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:14.320 16:04:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:14.580 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:14.580 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:14.839 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:14.839 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:14.839 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:15.099 [2024-11-19 16:04:21.600734] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:15.099 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:15.358 request: 00:09:15.358 { 00:09:15.358 "uuid": "031b648b-f3fc-4a40-be22-c5bbfa1bee50", 00:09:15.358 "method": "bdev_lvol_get_lvstores", 00:09:15.358 "req_id": 1 00:09:15.358 } 00:09:15.358 Got JSON-RPC error response 00:09:15.358 response: 00:09:15.358 { 00:09:15.358 "code": -19, 00:09:15.358 "message": "No such device" 00:09:15.358 } 00:09:15.358 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:15.358 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.358 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:15.358 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.358 16:04:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:15.617 aio_bdev 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 78362cea-f84b-4a99-aca6-c1c85a770f74 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=78362cea-f84b-4a99-aca6-c1c85a770f74 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@905 -- # local i 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:15.617 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:15.877 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78362cea-f84b-4a99-aca6-c1c85a770f74 -t 2000 00:09:16.136 [ 00:09:16.136 { 00:09:16.136 "name": "78362cea-f84b-4a99-aca6-c1c85a770f74", 00:09:16.136 "aliases": [ 00:09:16.136 "lvs/lvol" 00:09:16.136 ], 00:09:16.136 "product_name": "Logical Volume", 00:09:16.136 "block_size": 4096, 00:09:16.136 "num_blocks": 38912, 00:09:16.136 "uuid": "78362cea-f84b-4a99-aca6-c1c85a770f74", 00:09:16.136 "assigned_rate_limits": { 00:09:16.136 "rw_ios_per_sec": 0, 00:09:16.136 "rw_mbytes_per_sec": 0, 00:09:16.136 "r_mbytes_per_sec": 0, 00:09:16.136 "w_mbytes_per_sec": 0 00:09:16.136 }, 00:09:16.136 "claimed": false, 00:09:16.136 "zoned": false, 00:09:16.136 "supported_io_types": { 00:09:16.136 "read": true, 00:09:16.136 "write": true, 00:09:16.136 "unmap": true, 00:09:16.136 "flush": false, 00:09:16.136 "reset": true, 00:09:16.136 "nvme_admin": false, 00:09:16.136 "nvme_io": false, 00:09:16.136 "nvme_io_md": false, 00:09:16.136 "write_zeroes": true, 00:09:16.136 "zcopy": false, 00:09:16.136 "get_zone_info": false, 00:09:16.136 "zone_management": false, 00:09:16.136 "zone_append": false, 00:09:16.136 "compare": false, 00:09:16.136 "compare_and_write": false, 00:09:16.136 "abort": false, 00:09:16.136 "seek_hole": true, 00:09:16.136 "seek_data": true, 00:09:16.136 "copy": false, 00:09:16.136 "nvme_iov_md": false 00:09:16.136 }, 00:09:16.136 "driver_specific": { 00:09:16.136 "lvol": { 00:09:16.136 "lvol_store_uuid": "031b648b-f3fc-4a40-be22-c5bbfa1bee50", 00:09:16.136 "base_bdev": "aio_bdev", 00:09:16.136 "thin_provision": false, 00:09:16.136 "num_allocated_clusters": 38, 00:09:16.136 "snapshot": false, 00:09:16.136 "clone": false, 00:09:16.136 "esnap_clone": false 00:09:16.136 } 00:09:16.136 } 00:09:16.136 } 00:09:16.136 ] 00:09:16.136 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:16.136 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:16.136 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:16.395 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:16.395 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:16.395 16:04:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:16.655 16:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:16.655 16:04:23 
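The cluster checks just above follow from the sizes fixed at the start of the test: a 4 MiB cluster size over the 200 MiB aio_bdev gives 50 clusters, 49 of them usable (the remainder presumably taken by lvstore metadata); growing the file to 400 MiB doubles that to 99 usable clusters, and the 150 MiB lvol pins 38 of them, leaving 61 free. A back-of-the-envelope check of the last two assertions:

    # usable clusters after the grow (100 total minus 1), minus ceil(150 MiB / 4 MiB) allocated by the lvol
    awk 'BEGIN { print 400/4 - 1 - int((150 + 3) / 4) }'   # -> 61, matching (( free_clusters == 61 ))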
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 78362cea-f84b-4a99-aca6-c1c85a770f74 00:09:16.914 16:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 031b648b-f3fc-4a40-be22-c5bbfa1bee50 00:09:17.173 16:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:17.433 16:04:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.693 ************************************ 00:09:17.693 END TEST lvs_grow_clean 00:09:17.693 ************************************ 00:09:17.693 00:09:17.693 real 0m17.713s 00:09:17.693 user 0m16.646s 00:09:17.693 sys 0m2.419s 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:17.693 ************************************ 00:09:17.693 START TEST lvs_grow_dirty 00:09:17.693 ************************************ 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:17.693 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:18.261 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:09:18.261 16:04:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:18.520 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:18.520 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:18.520 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:18.780 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:18.780 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:18.780 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 22a3971b-ce25-4b68-a59f-682cb8d0679d lvol 150 00:09:19.039 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f1709c08-926e-43fe-807a-46adabc8464f 00:09:19.039 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:19.039 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:19.299 [2024-11-19 16:04:25.800084] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:19.299 [2024-11-19 16:04:25.800216] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:19.299 true 00:09:19.299 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:19.299 16:04:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:19.558 16:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:19.559 16:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:19.817 16:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1709c08-926e-43fe-807a-46adabc8464f 00:09:20.077 16:04:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:20.077 [2024-11-19 16:04:26.784553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:20.336 16:04:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:20.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=76491 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 76491 /var/tmp/bdevperf.sock 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 76491 ']' 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.336 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:20.596 [2024-11-19 16:04:27.072101] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
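For reference, the provisioning that the trace above has walked through reduces to the following RPC sequence. This is a minimal sketch reconstructed from the calls visible in this run; the repository path, the lvstore UUID 22a3971b-ce25-4b68-a59f-682cb8d0679d, the lvol UUID f1709c08-926e-43fe-807a-46adabc8464f and the 10.0.0.3 listener address are all specific to this run and would differ elsewhere.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # back the lvstore with a 200M file exposed as an AIO bdev with 4096-byte blocks
  truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
  # carve a 150M lvol out of the 49-cluster store, then enlarge the backing file
  $rpc bdev_lvol_create -u 22a3971b-ce25-4b68-a59f-682cb8d0679d lvol 150
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc bdev_aio_rescan aio_bdev
  # export the lvol over NVMe/TCP so bdevperf can attach to it
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f1709c08-926e-43fe-807a-46adabc8464f
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The bdevperf process launched above then connects to this subsystem from the initiator side, via the bdev_nvme_attach_controller call that follows in the trace.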
00:09:20.596 [2024-11-19 16:04:27.072386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76491 ] 00:09:20.596 [2024-11-19 16:04:27.221736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.596 [2024-11-19 16:04:27.246399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.596 [2024-11-19 16:04:27.280450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.855 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.855 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:20.855 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:21.114 Nvme0n1 00:09:21.114 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:21.373 [ 00:09:21.373 { 00:09:21.373 "name": "Nvme0n1", 00:09:21.373 "aliases": [ 00:09:21.373 "f1709c08-926e-43fe-807a-46adabc8464f" 00:09:21.373 ], 00:09:21.373 "product_name": "NVMe disk", 00:09:21.373 "block_size": 4096, 00:09:21.373 "num_blocks": 38912, 00:09:21.373 "uuid": "f1709c08-926e-43fe-807a-46adabc8464f", 00:09:21.373 "numa_id": -1, 00:09:21.373 "assigned_rate_limits": { 00:09:21.373 "rw_ios_per_sec": 0, 00:09:21.373 "rw_mbytes_per_sec": 0, 00:09:21.373 "r_mbytes_per_sec": 0, 00:09:21.373 "w_mbytes_per_sec": 0 00:09:21.373 }, 00:09:21.373 "claimed": false, 00:09:21.373 "zoned": false, 00:09:21.373 "supported_io_types": { 00:09:21.373 "read": true, 00:09:21.373 "write": true, 00:09:21.373 "unmap": true, 00:09:21.373 "flush": true, 00:09:21.373 "reset": true, 00:09:21.373 "nvme_admin": true, 00:09:21.373 "nvme_io": true, 00:09:21.373 "nvme_io_md": false, 00:09:21.373 "write_zeroes": true, 00:09:21.373 "zcopy": false, 00:09:21.373 "get_zone_info": false, 00:09:21.373 "zone_management": false, 00:09:21.373 "zone_append": false, 00:09:21.373 "compare": true, 00:09:21.373 "compare_and_write": true, 00:09:21.373 "abort": true, 00:09:21.373 "seek_hole": false, 00:09:21.373 "seek_data": false, 00:09:21.373 "copy": true, 00:09:21.373 "nvme_iov_md": false 00:09:21.373 }, 00:09:21.373 "memory_domains": [ 00:09:21.373 { 00:09:21.373 "dma_device_id": "system", 00:09:21.373 "dma_device_type": 1 00:09:21.373 } 00:09:21.373 ], 00:09:21.373 "driver_specific": { 00:09:21.373 "nvme": [ 00:09:21.373 { 00:09:21.373 "trid": { 00:09:21.373 "trtype": "TCP", 00:09:21.373 "adrfam": "IPv4", 00:09:21.373 "traddr": "10.0.0.3", 00:09:21.373 "trsvcid": "4420", 00:09:21.373 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:21.373 }, 00:09:21.373 "ctrlr_data": { 00:09:21.373 "cntlid": 1, 00:09:21.373 "vendor_id": "0x8086", 00:09:21.373 "model_number": "SPDK bdev Controller", 00:09:21.373 "serial_number": "SPDK0", 00:09:21.373 "firmware_revision": "25.01", 00:09:21.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:21.373 "oacs": { 00:09:21.373 "security": 0, 00:09:21.373 "format": 0, 00:09:21.373 "firmware": 0, 
00:09:21.373 "ns_manage": 0 00:09:21.373 }, 00:09:21.373 "multi_ctrlr": true, 00:09:21.373 "ana_reporting": false 00:09:21.373 }, 00:09:21.373 "vs": { 00:09:21.373 "nvme_version": "1.3" 00:09:21.373 }, 00:09:21.373 "ns_data": { 00:09:21.373 "id": 1, 00:09:21.373 "can_share": true 00:09:21.373 } 00:09:21.373 } 00:09:21.373 ], 00:09:21.373 "mp_policy": "active_passive" 00:09:21.373 } 00:09:21.373 } 00:09:21.373 ] 00:09:21.373 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:21.373 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=76502 00:09:21.374 16:04:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:21.374 Running I/O for 10 seconds... 00:09:22.311 Latency(us) 00:09:22.311 [2024-11-19T16:04:29.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:22.311 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.311 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:22.311 [2024-11-19T16:04:29.026Z] =================================================================================================================== 00:09:22.311 [2024-11-19T16:04:29.026Z] Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:22.311 00:09:23.256 16:04:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:23.515 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.515 Nvme0n1 : 2.00 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:23.515 [2024-11-19T16:04:30.230Z] =================================================================================================================== 00:09:23.515 [2024-11-19T16:04:30.230Z] Total : 6667.50 26.04 0.00 0.00 0.00 0.00 0.00 00:09:23.515 00:09:23.515 true 00:09:23.515 16:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:23.515 16:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:24.083 16:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:24.083 16:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:24.083 16:04:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 76502 00:09:24.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.341 Nvme0n1 : 3.00 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:24.341 [2024-11-19T16:04:31.056Z] =================================================================================================================== 00:09:24.341 [2024-11-19T16:04:31.056Z] Total : 6519.33 25.47 0.00 0.00 0.00 0.00 0.00 00:09:24.341 00:09:25.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.279 Nvme0n1 : 4.00 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:09:25.279 [2024-11-19T16:04:31.994Z] 
=================================================================================================================== 00:09:25.279 [2024-11-19T16:04:31.994Z] Total : 6445.25 25.18 0.00 0.00 0.00 0.00 0.00 00:09:25.279 00:09:26.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.657 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:26.657 [2024-11-19T16:04:33.372Z] =================================================================================================================== 00:09:26.657 [2024-11-19T16:04:33.372Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:09:26.657 00:09:27.593 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.593 Nvme0n1 : 6.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:27.593 [2024-11-19T16:04:34.308Z] =================================================================================================================== 00:09:27.593 [2024-11-19T16:04:34.308Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:09:27.593 00:09:28.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.530 Nvme0n1 : 7.00 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:09:28.530 [2024-11-19T16:04:35.245Z] =================================================================================================================== 00:09:28.530 [2024-11-19T16:04:35.245Z] Total : 6386.29 24.95 0.00 0.00 0.00 0.00 0.00 00:09:28.530 00:09:29.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.468 Nvme0n1 : 8.00 6278.25 24.52 0.00 0.00 0.00 0.00 0.00 00:09:29.468 [2024-11-19T16:04:36.183Z] =================================================================================================================== 00:09:29.468 [2024-11-19T16:04:36.183Z] Total : 6278.25 24.52 0.00 0.00 0.00 0.00 0.00 00:09:29.468 00:09:30.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.403 Nvme0n1 : 9.00 6272.11 24.50 0.00 0.00 0.00 0.00 0.00 00:09:30.403 [2024-11-19T16:04:37.118Z] =================================================================================================================== 00:09:30.403 [2024-11-19T16:04:37.118Z] Total : 6272.11 24.50 0.00 0.00 0.00 0.00 0.00 00:09:30.403 00:09:31.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.340 Nvme0n1 : 10.00 6292.60 24.58 0.00 0.00 0.00 0.00 0.00 00:09:31.340 [2024-11-19T16:04:38.055Z] =================================================================================================================== 00:09:31.340 [2024-11-19T16:04:38.055Z] Total : 6292.60 24.58 0.00 0.00 0.00 0.00 0.00 00:09:31.340 00:09:31.340 00:09:31.340 Latency(us) 00:09:31.340 [2024-11-19T16:04:38.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.340 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.340 Nvme0n1 : 10.02 6293.67 24.58 0.00 0.00 20332.17 6136.55 183024.17 00:09:31.340 [2024-11-19T16:04:38.055Z] =================================================================================================================== 00:09:31.340 [2024-11-19T16:04:38.055Z] Total : 6293.67 24.58 0.00 0.00 20332.17 6136.55 183024.17 00:09:31.340 { 00:09:31.340 "results": [ 00:09:31.340 { 00:09:31.340 "job": "Nvme0n1", 00:09:31.340 "core_mask": "0x2", 00:09:31.340 "workload": "randwrite", 00:09:31.340 "status": "finished", 00:09:31.340 "queue_depth": 128, 00:09:31.340 "io_size": 4096, 00:09:31.340 "runtime": 
10.018643, 00:09:31.340 "iops": 6293.666717139237, 00:09:31.340 "mibps": 24.584635613825146, 00:09:31.340 "io_failed": 0, 00:09:31.340 "io_timeout": 0, 00:09:31.340 "avg_latency_us": 20332.16855370721, 00:09:31.340 "min_latency_us": 6136.552727272728, 00:09:31.340 "max_latency_us": 183024.17454545456 00:09:31.340 } 00:09:31.340 ], 00:09:31.340 "core_count": 1 00:09:31.340 } 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 76491 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 76491 ']' 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 76491 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76491 00:09:31.340 killing process with pid 76491 00:09:31.340 Received shutdown signal, test time was about 10.000000 seconds 00:09:31.340 00:09:31.340 Latency(us) 00:09:31.340 [2024-11-19T16:04:38.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:31.340 [2024-11-19T16:04:38.055Z] =================================================================================================================== 00:09:31.340 [2024-11-19T16:04:38.055Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76491' 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 76491 00:09:31.340 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 76491 00:09:31.599 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:31.857 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.116 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:32.116 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.375 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.375 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:32.375 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 76153 
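The kill -9 above is the whole point of the dirty variant: the nvmf target (pid 76153) that owns lvstore lvs is terminated without a clean shutdown, so the blobstore behind it is never marked clean and the next load has to run recovery. In script terms the step is roughly the following guard; the variable names are illustrative, only the commands match the trace:

  # dirty variant: hard-kill the target while the lvstore is still open so the
  # on-disk blobstore is left in a not-cleanly-shut-down state
  if [[ $mode == dirty ]]; then
      kill -9 "$nvmfpid"          # 76153 in this run
      wait "$nvmfpid" || true     # reap the SIGKILLed target; a non-zero status is expected
  fi

The clean variant skips this and simply deletes the lvol, the lvstore and the AIO bdev through the normal RPCs, as seen at the end of TEST lvs_grow_clean earlier in the log.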
00:09:32.375 16:04:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 76153 00:09:32.375 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 76153 Killed "${NVMF_APP[@]}" "$@" 00:09:32.375 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:32.375 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:32.375 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:32.375 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.375 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=76640 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 76640 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 76640 ']' 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.376 16:04:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.635 [2024-11-19 16:04:39.096619] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:09:32.635 [2024-11-19 16:04:39.096930] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.635 [2024-11-19 16:04:39.243070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.635 [2024-11-19 16:04:39.261025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.635 [2024-11-19 16:04:39.261308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.635 [2024-11-19 16:04:39.261445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.635 [2024-11-19 16:04:39.261584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.635 [2024-11-19 16:04:39.261601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
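With the replacement target (pid 76640) listening, the harness re-creates the same AIO bdev; because the previous owner was SIGKILLed, loading the lvstore triggers the blobstore recovery pass reported just below ("Performing recovery on blobstore"), after which the lvol reappears and the post-crash state is checked. The verification amounts to re-reading the lvstore counters, roughly as follows (same rpc.py path and lvstore UUID as in the sketch above; the expected values 61 and 99 are taken from this run):

  $rpc bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  $rpc bdev_wait_for_examine      # wait for vbdev_lvol to claim the recovered lvstore
  $rpc bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d | jq -r '.[0].free_clusters'
  # expect 61: the 38 clusters allocated to the lvol are still accounted for
  $rpc bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d | jq -r '.[0].total_data_clusters'
  # expect 99: the grow performed while bdevperf was writing survived the crash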
00:09:32.635 [2024-11-19 16:04:39.261901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.635 [2024-11-19 16:04:39.290453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.571 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.829 [2024-11-19 16:04:40.373856] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:33.829 [2024-11-19 16:04:40.374113] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:33.829 [2024-11-19 16:04:40.374356] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f1709c08-926e-43fe-807a-46adabc8464f 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f1709c08-926e-43fe-807a-46adabc8464f 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.829 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.088 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1709c08-926e-43fe-807a-46adabc8464f -t 2000 00:09:34.346 [ 00:09:34.346 { 00:09:34.346 "name": "f1709c08-926e-43fe-807a-46adabc8464f", 00:09:34.346 "aliases": [ 00:09:34.346 "lvs/lvol" 00:09:34.346 ], 00:09:34.346 "product_name": "Logical Volume", 00:09:34.346 "block_size": 4096, 00:09:34.346 "num_blocks": 38912, 00:09:34.346 "uuid": "f1709c08-926e-43fe-807a-46adabc8464f", 00:09:34.346 "assigned_rate_limits": { 00:09:34.346 "rw_ios_per_sec": 0, 00:09:34.346 "rw_mbytes_per_sec": 0, 00:09:34.346 "r_mbytes_per_sec": 0, 00:09:34.346 "w_mbytes_per_sec": 0 00:09:34.346 }, 00:09:34.347 
"claimed": false, 00:09:34.347 "zoned": false, 00:09:34.347 "supported_io_types": { 00:09:34.347 "read": true, 00:09:34.347 "write": true, 00:09:34.347 "unmap": true, 00:09:34.347 "flush": false, 00:09:34.347 "reset": true, 00:09:34.347 "nvme_admin": false, 00:09:34.347 "nvme_io": false, 00:09:34.347 "nvme_io_md": false, 00:09:34.347 "write_zeroes": true, 00:09:34.347 "zcopy": false, 00:09:34.347 "get_zone_info": false, 00:09:34.347 "zone_management": false, 00:09:34.347 "zone_append": false, 00:09:34.347 "compare": false, 00:09:34.347 "compare_and_write": false, 00:09:34.347 "abort": false, 00:09:34.347 "seek_hole": true, 00:09:34.347 "seek_data": true, 00:09:34.347 "copy": false, 00:09:34.347 "nvme_iov_md": false 00:09:34.347 }, 00:09:34.347 "driver_specific": { 00:09:34.347 "lvol": { 00:09:34.347 "lvol_store_uuid": "22a3971b-ce25-4b68-a59f-682cb8d0679d", 00:09:34.347 "base_bdev": "aio_bdev", 00:09:34.347 "thin_provision": false, 00:09:34.347 "num_allocated_clusters": 38, 00:09:34.347 "snapshot": false, 00:09:34.347 "clone": false, 00:09:34.347 "esnap_clone": false 00:09:34.347 } 00:09:34.347 } 00:09:34.347 } 00:09:34.347 ] 00:09:34.347 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:34.347 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:34.347 16:04:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:34.606 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:34.606 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:34.606 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:34.898 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:34.898 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.209 [2024-11-19 16:04:41.715878] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.209 16:04:41 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:35.209 16:04:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:35.468 request: 00:09:35.468 { 00:09:35.468 "uuid": "22a3971b-ce25-4b68-a59f-682cb8d0679d", 00:09:35.468 "method": "bdev_lvol_get_lvstores", 00:09:35.468 "req_id": 1 00:09:35.468 } 00:09:35.468 Got JSON-RPC error response 00:09:35.468 response: 00:09:35.468 { 00:09:35.468 "code": -19, 00:09:35.468 "message": "No such device" 00:09:35.468 } 00:09:35.468 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:09:35.468 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:35.468 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:35.468 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:35.468 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.726 aio_bdev 00:09:35.726 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f1709c08-926e-43fe-807a-46adabc8464f 00:09:35.726 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f1709c08-926e-43fe-807a-46adabc8464f 00:09:35.727 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:35.727 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:09:35.727 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:35.727 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:35.727 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:35.985 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f1709c08-926e-43fe-807a-46adabc8464f -t 2000 00:09:36.244 [ 00:09:36.244 { 
00:09:36.244 "name": "f1709c08-926e-43fe-807a-46adabc8464f", 00:09:36.244 "aliases": [ 00:09:36.244 "lvs/lvol" 00:09:36.244 ], 00:09:36.244 "product_name": "Logical Volume", 00:09:36.244 "block_size": 4096, 00:09:36.244 "num_blocks": 38912, 00:09:36.244 "uuid": "f1709c08-926e-43fe-807a-46adabc8464f", 00:09:36.244 "assigned_rate_limits": { 00:09:36.244 "rw_ios_per_sec": 0, 00:09:36.244 "rw_mbytes_per_sec": 0, 00:09:36.244 "r_mbytes_per_sec": 0, 00:09:36.244 "w_mbytes_per_sec": 0 00:09:36.244 }, 00:09:36.244 "claimed": false, 00:09:36.244 "zoned": false, 00:09:36.244 "supported_io_types": { 00:09:36.244 "read": true, 00:09:36.244 "write": true, 00:09:36.244 "unmap": true, 00:09:36.244 "flush": false, 00:09:36.244 "reset": true, 00:09:36.244 "nvme_admin": false, 00:09:36.244 "nvme_io": false, 00:09:36.244 "nvme_io_md": false, 00:09:36.244 "write_zeroes": true, 00:09:36.244 "zcopy": false, 00:09:36.244 "get_zone_info": false, 00:09:36.244 "zone_management": false, 00:09:36.245 "zone_append": false, 00:09:36.245 "compare": false, 00:09:36.245 "compare_and_write": false, 00:09:36.245 "abort": false, 00:09:36.245 "seek_hole": true, 00:09:36.245 "seek_data": true, 00:09:36.245 "copy": false, 00:09:36.245 "nvme_iov_md": false 00:09:36.245 }, 00:09:36.245 "driver_specific": { 00:09:36.245 "lvol": { 00:09:36.245 "lvol_store_uuid": "22a3971b-ce25-4b68-a59f-682cb8d0679d", 00:09:36.245 "base_bdev": "aio_bdev", 00:09:36.245 "thin_provision": false, 00:09:36.245 "num_allocated_clusters": 38, 00:09:36.245 "snapshot": false, 00:09:36.245 "clone": false, 00:09:36.245 "esnap_clone": false 00:09:36.245 } 00:09:36.245 } 00:09:36.245 } 00:09:36.245 ] 00:09:36.245 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:09:36.245 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:36.245 16:04:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:36.503 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:36.503 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:36.503 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:36.762 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:36.762 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f1709c08-926e-43fe-807a-46adabc8464f 00:09:37.021 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22a3971b-ce25-4b68-a59f-682cb8d0679d 00:09:37.279 16:04:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:37.538 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:38.104 ************************************ 00:09:38.104 END TEST lvs_grow_dirty 00:09:38.104 ************************************ 00:09:38.104 00:09:38.104 real 0m20.184s 00:09:38.104 user 0m39.005s 00:09:38.104 sys 0m9.455s 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:38.104 nvmf_trace.0 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.104 16:04:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.672 rmmod nvme_tcp 00:09:38.672 rmmod nvme_fabrics 00:09:38.672 rmmod nvme_keyring 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 76640 ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 76640 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 76640 ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 76640 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:09:38.672 16:04:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76640 00:09:38.672 killing process with pid 76640 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76640' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 76640 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 76640 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:38.672 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:38.931 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:38.932 ************************************ 00:09:38.932 END TEST nvmf_lvs_grow 00:09:38.932 ************************************ 00:09:38.932 00:09:38.932 real 0m40.226s 00:09:38.932 user 1m2.512s 00:09:38.932 sys 0m12.947s 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.932 ************************************ 00:09:38.932 START TEST nvmf_bdev_io_wait 00:09:38.932 ************************************ 00:09:38.932 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:39.192 * Looking for test storage... 
00:09:39.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.192 --rc genhtml_branch_coverage=1 00:09:39.192 --rc genhtml_function_coverage=1 00:09:39.192 --rc genhtml_legend=1 00:09:39.192 --rc geninfo_all_blocks=1 00:09:39.192 --rc geninfo_unexecuted_blocks=1 00:09:39.192 00:09:39.192 ' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.192 --rc genhtml_branch_coverage=1 00:09:39.192 --rc genhtml_function_coverage=1 00:09:39.192 --rc genhtml_legend=1 00:09:39.192 --rc geninfo_all_blocks=1 00:09:39.192 --rc geninfo_unexecuted_blocks=1 00:09:39.192 00:09:39.192 ' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.192 --rc genhtml_branch_coverage=1 00:09:39.192 --rc genhtml_function_coverage=1 00:09:39.192 --rc genhtml_legend=1 00:09:39.192 --rc geninfo_all_blocks=1 00:09:39.192 --rc geninfo_unexecuted_blocks=1 00:09:39.192 00:09:39.192 ' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.192 --rc genhtml_branch_coverage=1 00:09:39.192 --rc genhtml_function_coverage=1 00:09:39.192 --rc genhtml_legend=1 00:09:39.192 --rc geninfo_all_blocks=1 00:09:39.192 --rc geninfo_unexecuted_blocks=1 00:09:39.192 00:09:39.192 ' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.192 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.192 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:39.193 
16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:39.193 Cannot find device "nvmf_init_br" 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:39.193 Cannot find device "nvmf_init_br2" 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:39.193 Cannot find device "nvmf_tgt_br" 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:39.193 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:39.452 Cannot find device "nvmf_tgt_br2" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:39.452 Cannot find device "nvmf_init_br" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:39.452 Cannot find device "nvmf_init_br2" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:39.452 Cannot find device "nvmf_tgt_br" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:39.452 Cannot find device "nvmf_tgt_br2" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:39.452 Cannot find device "nvmf_br" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:39.452 Cannot find device "nvmf_init_if" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:39.452 Cannot find device "nvmf_init_if2" 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:39.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:39.452 
16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:39.452 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:39.452 16:04:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:39.452 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:39.711 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:39.711 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:09:39.711 00:09:39.711 --- 10.0.0.3 ping statistics --- 00:09:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.711 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:39.711 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:39.711 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:09:39.711 00:09:39.711 --- 10.0.0.4 ping statistics --- 00:09:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.711 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:39.711 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:39.711 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:09:39.711 00:09:39.711 --- 10.0.0.1 ping statistics --- 00:09:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.711 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:39.711 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:39.711 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:09:39.711 00:09:39.711 --- 10.0.0.2 ping statistics --- 00:09:39.711 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:39.711 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=77014 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 77014 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 77014 ']' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.711 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.711 [2024-11-19 16:04:46.292987] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
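The nvmf_veth_init steps above assemble a purely virtual NVMe/TCP test network: two veth pairs on the initiator side (nvmf_init_if/if2 at 10.0.0.1-2), two more whose far ends are moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if/if2 at 10.0.0.3-4), all bridge-end peers enslaved to nvmf_br, iptables ACCEPT rules for TCP port 4420 on the initiator interfaces, and a one-packet ping across each path as a sanity check. A reduced sketch of the same topology with a single initiator/target pair, using the names and addresses from this run, is:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host side reaches the target namespace over the bridge

The earlier "Cannot find device" / "Cannot open network namespace" lines are expected: the script first tears down any leftover topology before creating a fresh one, and each failing delete is followed by a bare true in the trace so the cleanup never aborts the run.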
00:09:39.711 [2024-11-19 16:04:46.293316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.970 [2024-11-19 16:04:46.445194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.970 [2024-11-19 16:04:46.465131] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.970 [2024-11-19 16:04:46.465437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.970 [2024-11-19 16:04:46.465803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.970 [2024-11-19 16:04:46.466003] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.970 [2024-11-19 16:04:46.466336] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.970 [2024-11-19 16:04:46.467357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.970 [2024-11-19 16:04:46.467441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.970 [2024-11-19 16:04:46.467477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.970 [2024-11-19 16:04:46.467478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 [2024-11-19 16:04:46.640723] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 [2024-11-19 16:04:46.655852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.970 Malloc0 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.970 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:40.230 [2024-11-19 16:04:46.703625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=77036 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=77038 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.230 16:04:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=77040 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.230 { 00:09:40.230 "params": { 00:09:40.230 "name": "Nvme$subsystem", 00:09:40.230 "trtype": "$TEST_TRANSPORT", 00:09:40.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.230 "adrfam": "ipv4", 00:09:40.230 "trsvcid": "$NVMF_PORT", 00:09:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.230 "hdgst": ${hdgst:-false}, 00:09:40.230 "ddgst": ${ddgst:-false} 00:09:40.230 }, 00:09:40.230 "method": "bdev_nvme_attach_controller" 00:09:40.230 } 00:09:40.230 EOF 00:09:40.230 )") 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.230 { 00:09:40.230 "params": { 00:09:40.230 "name": "Nvme$subsystem", 00:09:40.230 "trtype": "$TEST_TRANSPORT", 00:09:40.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.230 "adrfam": "ipv4", 00:09:40.230 "trsvcid": "$NVMF_PORT", 00:09:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.230 "hdgst": ${hdgst:-false}, 00:09:40.230 "ddgst": ${ddgst:-false} 00:09:40.230 }, 00:09:40.230 "method": "bdev_nvme_attach_controller" 00:09:40.230 } 00:09:40.230 EOF 00:09:40.230 )") 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=77043 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.230 { 00:09:40.230 "params": { 00:09:40.230 "name": "Nvme$subsystem", 00:09:40.230 "trtype": "$TEST_TRANSPORT", 00:09:40.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.230 "adrfam": "ipv4", 00:09:40.230 "trsvcid": "$NVMF_PORT", 00:09:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.230 "hdgst": ${hdgst:-false}, 00:09:40.230 "ddgst": ${ddgst:-false} 00:09:40.230 }, 00:09:40.230 "method": "bdev_nvme_attach_controller" 00:09:40.230 } 00:09:40.230 EOF 
00:09:40.230 )") 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:40.230 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:40.230 { 00:09:40.230 "params": { 00:09:40.230 "name": "Nvme$subsystem", 00:09:40.230 "trtype": "$TEST_TRANSPORT", 00:09:40.230 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:40.230 "adrfam": "ipv4", 00:09:40.230 "trsvcid": "$NVMF_PORT", 00:09:40.230 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:40.230 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:40.230 "hdgst": ${hdgst:-false}, 00:09:40.230 "ddgst": ${ddgst:-false} 00:09:40.230 }, 00:09:40.231 "method": "bdev_nvme_attach_controller" 00:09:40.231 } 00:09:40.231 EOF 00:09:40.231 )") 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.231 "params": { 00:09:40.231 "name": "Nvme1", 00:09:40.231 "trtype": "tcp", 00:09:40.231 "traddr": "10.0.0.3", 00:09:40.231 "adrfam": "ipv4", 00:09:40.231 "trsvcid": "4420", 00:09:40.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.231 "hdgst": false, 00:09:40.231 "ddgst": false 00:09:40.231 }, 00:09:40.231 "method": "bdev_nvme_attach_controller" 00:09:40.231 }' 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.231 "params": { 00:09:40.231 "name": "Nvme1", 00:09:40.231 "trtype": "tcp", 00:09:40.231 "traddr": "10.0.0.3", 00:09:40.231 "adrfam": "ipv4", 00:09:40.231 "trsvcid": "4420", 00:09:40.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.231 "hdgst": false, 00:09:40.231 "ddgst": false 00:09:40.231 }, 00:09:40.231 "method": "bdev_nvme_attach_controller" 00:09:40.231 }' 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.231 "params": { 00:09:40.231 "name": "Nvme1", 00:09:40.231 "trtype": "tcp", 00:09:40.231 "traddr": "10.0.0.3", 00:09:40.231 "adrfam": "ipv4", 00:09:40.231 "trsvcid": "4420", 00:09:40.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.231 "hdgst": false, 00:09:40.231 "ddgst": false 00:09:40.231 }, 00:09:40.231 "method": "bdev_nvme_attach_controller" 00:09:40.231 }' 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:40.231 "params": { 00:09:40.231 "name": "Nvme1", 00:09:40.231 "trtype": "tcp", 00:09:40.231 "traddr": "10.0.0.3", 00:09:40.231 "adrfam": "ipv4", 00:09:40.231 "trsvcid": "4420", 00:09:40.231 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:40.231 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:40.231 "hdgst": false, 00:09:40.231 "ddgst": false 00:09:40.231 }, 00:09:40.231 "method": "bdev_nvme_attach_controller" 00:09:40.231 }' 00:09:40.231 16:04:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 77036 00:09:40.231 [2024-11-19 16:04:46.775389] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:09:40.231 [2024-11-19 16:04:46.775608] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:40.231 [2024-11-19 16:04:46.776978] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:09:40.231 [2024-11-19 16:04:46.777049] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:40.231 [2024-11-19 16:04:46.789330] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:09:40.231 [2024-11-19 16:04:46.789551] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:40.231 [2024-11-19 16:04:46.807008] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:09:40.231 [2024-11-19 16:04:46.807333] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:40.490 [2024-11-19 16:04:46.972165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.490 [2024-11-19 16:04:46.989610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:40.490 [2024-11-19 16:04:47.003661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.490 [2024-11-19 16:04:47.010723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.490 [2024-11-19 16:04:47.027215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:40.490 [2024-11-19 16:04:47.041406] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.490 [2024-11-19 16:04:47.054847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.490 [2024-11-19 16:04:47.070899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:40.490 [2024-11-19 16:04:47.084834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.490 [2024-11-19 16:04:47.096135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.490 Running I/O for 1 seconds... 00:09:40.490 [2024-11-19 16:04:47.112623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:09:40.490 [2024-11-19 16:04:47.126742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:40.490 Running I/O for 1 seconds... 00:09:40.490 Running I/O for 1 seconds... 00:09:40.748 Running I/O for 1 seconds... 
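By this point the script has configured the target over RPC (TCP transport, a Malloc0 backing bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a listener on 10.0.0.3:4420) and has fanned out four bdevperf instances in parallel, one per workload (write, read, flush, unmap), each on its own core mask and each reading the bdev_nvme_attach_controller JSON printed above via /dev/fd/63. A condensed sketch of the equivalent manual sequence, assuming scripts/rpc.py talks to the target on its default socket and that the attach-controller parameters have been saved to a file named nvme.json (a name chosen here purely for illustration), would be:

    # target-side configuration, mirroring the rpc_cmd calls in the trace
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # one of the four initiator-side runs: 128-deep queue, 4 KiB I/O, 1 second, core mask 0x10
    build/examples/bdevperf --json nvme.json -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256

The core masks 0x10/0x20/0x40/0x80 put the four bdevperf reactors on cores 4-7, away from cores 0-3 where the target was started with -m 0xF, so the write/read/flush/unmap jobs and the target do not contend for the same cores.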
00:09:41.683 6128.00 IOPS, 23.94 MiB/s 00:09:41.683 Latency(us) 00:09:41.683 [2024-11-19T16:04:48.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.683 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:41.683 Nvme1n1 : 1.02 6122.27 23.92 0.00 0.00 20610.96 7030.23 35270.28 00:09:41.683 [2024-11-19T16:04:48.398Z] =================================================================================================================== 00:09:41.683 [2024-11-19T16:04:48.398Z] Total : 6122.27 23.92 0.00 0.00 20610.96 7030.23 35270.28 00:09:41.683 8130.00 IOPS, 31.76 MiB/s 00:09:41.683 Latency(us) 00:09:41.683 [2024-11-19T16:04:48.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.683 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:41.683 Nvme1n1 : 1.01 8169.51 31.91 0.00 0.00 15576.22 9592.09 26571.87 00:09:41.683 [2024-11-19T16:04:48.398Z] =================================================================================================================== 00:09:41.683 [2024-11-19T16:04:48.398Z] Total : 8169.51 31.91 0.00 0.00 15576.22 9592.09 26571.87 00:09:41.683 164944.00 IOPS, 644.31 MiB/s 00:09:41.683 Latency(us) 00:09:41.683 [2024-11-19T16:04:48.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.683 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:41.683 Nvme1n1 : 1.00 164563.79 642.83 0.00 0.00 773.62 364.92 2293.76 00:09:41.683 [2024-11-19T16:04:48.398Z] =================================================================================================================== 00:09:41.683 [2024-11-19T16:04:48.398Z] Total : 164563.79 642.83 0.00 0.00 773.62 364.92 2293.76 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 77038 00:09:41.683 6376.00 IOPS, 24.91 MiB/s 00:09:41.683 Latency(us) 00:09:41.683 [2024-11-19T16:04:48.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.683 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:41.683 Nvme1n1 : 1.01 6511.32 25.43 0.00 0.00 19595.33 5332.25 41943.04 00:09:41.683 [2024-11-19T16:04:48.398Z] =================================================================================================================== 00:09:41.683 [2024-11-19T16:04:48.398Z] Total : 6511.32 25.43 0.00 0.00 19595.33 5332.25 41943.04 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 77040 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 77043 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:41.683 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:09:41.684 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:41.943 rmmod nvme_tcp 00:09:41.943 rmmod nvme_fabrics 00:09:41.943 rmmod nvme_keyring 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 77014 ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 77014 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 77014 ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 77014 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77014 00:09:41.943 killing process with pid 77014 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77014' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 77014 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 77014 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:41.943 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:42.203 ************************************ 00:09:42.203 END TEST nvmf_bdev_io_wait 00:09:42.203 ************************************ 00:09:42.203 00:09:42.203 real 0m3.248s 00:09:42.203 user 0m12.738s 00:09:42.203 sys 0m2.049s 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.203 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.464 16:04:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.464 16:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:42.464 16:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.464 16:04:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.464 ************************************ 00:09:42.464 START TEST nvmf_queue_depth 00:09:42.464 ************************************ 00:09:42.464 16:04:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:42.464 * Looking for test storage... 
00:09:42.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.464 --rc genhtml_branch_coverage=1 00:09:42.464 --rc genhtml_function_coverage=1 00:09:42.464 --rc genhtml_legend=1 00:09:42.464 --rc geninfo_all_blocks=1 00:09:42.464 --rc geninfo_unexecuted_blocks=1 00:09:42.464 00:09:42.464 ' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.464 --rc genhtml_branch_coverage=1 00:09:42.464 --rc genhtml_function_coverage=1 00:09:42.464 --rc genhtml_legend=1 00:09:42.464 --rc geninfo_all_blocks=1 00:09:42.464 --rc geninfo_unexecuted_blocks=1 00:09:42.464 00:09:42.464 ' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.464 --rc genhtml_branch_coverage=1 00:09:42.464 --rc genhtml_function_coverage=1 00:09:42.464 --rc genhtml_legend=1 00:09:42.464 --rc geninfo_all_blocks=1 00:09:42.464 --rc geninfo_unexecuted_blocks=1 00:09:42.464 00:09:42.464 ' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:42.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.464 --rc genhtml_branch_coverage=1 00:09:42.464 --rc genhtml_function_coverage=1 00:09:42.464 --rc genhtml_legend=1 00:09:42.464 --rc geninfo_all_blocks=1 00:09:42.464 --rc geninfo_unexecuted_blocks=1 00:09:42.464 00:09:42.464 ' 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.464 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:42.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:42.465 
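Unlike bdev_io_wait, the queue_depth test defines bdevperf_rpc_sock=/var/tmp/bdevperf.sock up front, which suggests its bdevperf instance will be launched in RPC-driven mode and controlled over that socket rather than fired off with a one-shot command line. A rough sketch of that launch style, with flag values chosen for illustration rather than taken from this log, would be:

    # -z: start idle and wait for an RPC trigger; -r: expose the dedicated bdevperf RPC socket
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

The test body itself begins only after the same virtual-network setup that follows below.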
16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:42.465 16:04:49 
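The nvmf_veth_init variables above lay out the test topology for the queue-depth run: initiator addresses 10.0.0.1 and 10.0.0.2 on the host, target addresses 10.0.0.3 and 10.0.0.4 inside the nvmf_tgt_ns_spdk namespace, with all veth peers enslaved to the nvmf_br bridge. A condensed, standalone sketch of what the following trace builds, shown for one initiator/target pair only and without the iptables rules (assumes root and the same iproute2 tools used in the trace):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                          # host initiator -> namespaced target

The second pair (nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2 and 10.0.0.4) is created the same way in the trace.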
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:42.465 Cannot find device "nvmf_init_br" 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:42.465 Cannot find device "nvmf_init_br2" 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:42.465 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:42.724 Cannot find device "nvmf_tgt_br" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:42.724 Cannot find device "nvmf_tgt_br2" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:42.724 Cannot find device "nvmf_init_br" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:42.724 Cannot find device "nvmf_init_br2" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:42.724 Cannot find device "nvmf_tgt_br" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:42.724 Cannot find device "nvmf_tgt_br2" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:42.724 Cannot find device "nvmf_br" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:42.724 Cannot find device "nvmf_init_if" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:42.724 Cannot find device "nvmf_init_if2" 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:42.724 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:42.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.724 16:04:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:42.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:42.725 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:42.984 
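The "Cannot find device ..." and "Cannot open network namespace ..." messages earlier in this block are expected on a clean host: nvmf_veth_init first tears down any leftovers from a previous run, every delete fails because nothing exists yet, and each failing command is paired with true in the trace so the script's error handling does not abort. The same best-effort cleanup idiom in isolation, as a sketch:

    # best-effort teardown; ignore "does not exist" failures
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true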
16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:42.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:42.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:09:42.984 00:09:42.984 --- 10.0.0.3 ping statistics --- 00:09:42.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.984 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:42.984 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:42.984 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:09:42.984 00:09:42.984 --- 10.0.0.4 ping statistics --- 00:09:42.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.984 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:42.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:42.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:09:42.984 00:09:42.984 --- 10.0.0.1 ping statistics --- 00:09:42.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.984 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:42.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:42.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:09:42.984 00:09:42.984 --- 10.0.0.2 ping statistics --- 00:09:42.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:42.984 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=77302 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 77302 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 77302 ']' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.984 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:42.984 [2024-11-19 16:04:49.650781] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:09:42.984 [2024-11-19 16:04:49.650895] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.244 [2024-11-19 16:04:49.817262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.244 [2024-11-19 16:04:49.840458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.244 [2024-11-19 16:04:49.840529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:43.244 [2024-11-19 16:04:49.840554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.244 [2024-11-19 16:04:49.840564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.244 [2024-11-19 16:04:49.840573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.244 [2024-11-19 16:04:49.840953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.244 [2024-11-19 16:04:49.874990] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.244 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.244 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:43.244 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:43.244 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.244 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.503 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [2024-11-19 16:04:49.965209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 Malloc0 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.504 16:04:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [2024-11-19 16:04:50.013170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=77327 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 77327 /var/tmp/bdevperf.sock 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 77327 ']' 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.504 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.504 [2024-11-19 16:04:50.080748] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
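The RPC calls traced above assemble the target side of the queue-depth test: a TCP transport with 8192-byte units, a 64 MiB / 512-byte-block Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev, and a listener on 10.0.0.3:4420; bdevperf is then launched in wait-for-RPC mode with a 1024-deep, 4 KiB verify workload for 10 seconds. A rough standalone recreation of the same sequence (paths relative to the SPDK repo root, nvmf_tgt assumed to be answering on the default /var/tmp/spdk.sock):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevperf waits (-z) until a controller is attached over its own RPC socket
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests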
00:09:43.504 [2024-11-19 16:04:50.080865] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77327 ] 00:09:43.763 [2024-11-19 16:04:50.231594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.763 [2024-11-19 16:04:50.251973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.763 [2024-11-19 16:04:50.279817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:43.763 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:43.764 NVMe0n1 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.764 16:04:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:44.023 Running I/O for 10 seconds... 00:09:45.898 7374.00 IOPS, 28.80 MiB/s [2024-11-19T16:04:53.991Z] 7497.50 IOPS, 29.29 MiB/s [2024-11-19T16:04:54.559Z] 7904.00 IOPS, 30.88 MiB/s [2024-11-19T16:04:55.937Z] 8214.00 IOPS, 32.09 MiB/s [2024-11-19T16:04:56.875Z] 8142.40 IOPS, 31.81 MiB/s [2024-11-19T16:04:57.821Z] 8335.50 IOPS, 32.56 MiB/s [2024-11-19T16:04:58.772Z] 8475.86 IOPS, 33.11 MiB/s [2024-11-19T16:04:59.709Z] 8456.75 IOPS, 33.03 MiB/s [2024-11-19T16:05:00.645Z] 8376.89 IOPS, 32.72 MiB/s [2024-11-19T16:05:00.904Z] 8301.40 IOPS, 32.43 MiB/s 00:09:54.189 Latency(us) 00:09:54.189 [2024-11-19T16:05:00.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.189 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:54.189 Verification LBA range: start 0x0 length 0x4000 00:09:54.189 NVMe0n1 : 10.11 8308.99 32.46 0.00 0.00 122631.72 23473.80 91035.46 00:09:54.189 [2024-11-19T16:05:00.904Z] =================================================================================================================== 00:09:54.189 [2024-11-19T16:05:00.904Z] Total : 8308.99 32.46 0.00 0.00 122631.72 23473.80 91035.46 00:09:54.189 { 00:09:54.189 "results": [ 00:09:54.189 { 00:09:54.189 "job": "NVMe0n1", 00:09:54.189 "core_mask": "0x1", 00:09:54.189 "workload": "verify", 00:09:54.189 "status": "finished", 00:09:54.189 "verify_range": { 00:09:54.189 "start": 0, 00:09:54.189 "length": 16384 00:09:54.189 }, 00:09:54.189 "queue_depth": 1024, 00:09:54.189 "io_size": 4096, 00:09:54.189 "runtime": 10.11146, 00:09:54.189 "iops": 8308.98801953427, 00:09:54.189 "mibps": 32.45698445130574, 00:09:54.189 "io_failed": 0, 00:09:54.189 "io_timeout": 0, 00:09:54.189 "avg_latency_us": 122631.71975554439, 00:09:54.189 "min_latency_us": 23473.803636363635, 00:09:54.189 "max_latency_us": 91035.46181818182 00:09:54.189 } 
00:09:54.189 ], 00:09:54.189 "core_count": 1 00:09:54.189 } 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 77327 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 77327 ']' 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 77327 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77327 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.189 killing process with pid 77327 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77327' 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 77327 00:09:54.189 Received shutdown signal, test time was about 10.000000 seconds 00:09:54.189 00:09:54.189 Latency(us) 00:09:54.189 [2024-11-19T16:05:00.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:54.189 [2024-11-19T16:05:00.904Z] =================================================================================================================== 00:09:54.189 [2024-11-19T16:05:00.904Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 77327 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.189 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.448 rmmod nvme_tcp 00:09:54.448 rmmod nvme_fabrics 00:09:54.448 rmmod nvme_keyring 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 77302 ']' 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 77302 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 77302 ']' 00:09:54.448 
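The summary above is internally consistent: at 4 KiB per I/O, 8308.99 IOPS corresponds to 8308.99 x 4096 bytes/s, about 32.5 MiB/s, and by Little's law the average latency at a queue depth of 1024 should be roughly queue_depth / IOPS. A quick back-of-the-envelope check against the reported numbers (plain arithmetic, not part of the test output):

    throughput  ~= IOPS x io_size     = 8308.99 x 4096 B  ~= 32.46 MiB/s   (reported: 32.46 MiB/s)
    avg latency ~= queue_depth / IOPS = 1024 / 8308.99 s  ~= 123 ms        (reported: ~122.6 ms average)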
16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 77302 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.448 16:05:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77302 00:09:54.448 killing process with pid 77302 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77302' 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 77302 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 77302 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:54.448 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:54.707 16:05:01 
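nvmftestfini above removes the test's firewall rules by filtering the saved ruleset: every rule that nvmftestinit added carries an 'SPDK_NVMF:' comment, so piping iptables-save through grep -v SPDK_NVMF and back into iptables-restore deletes exactly those rules and nothing else. The tag-and-sweep pattern in isolation, assuming the iptables comment match module (both halves appear verbatim in this trace):

    # setup: tag the rule so it can be found later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # teardown: drop every tagged rule in one pass
    iptables-save | grep -v SPDK_NVMF | iptables-restore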
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:09:54.707 00:09:54.707 real 0m12.473s 00:09:54.707 user 0m21.264s 00:09:54.707 sys 0m2.117s 00:09:54.707 ************************************ 00:09:54.707 END TEST nvmf_queue_depth 00:09:54.707 ************************************ 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.707 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.966 ************************************ 00:09:54.966 START TEST nvmf_target_multipath 00:09:54.966 ************************************ 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:54.966 * Looking for test storage... 
00:09:54.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:54.966 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.967 --rc genhtml_branch_coverage=1 00:09:54.967 --rc genhtml_function_coverage=1 00:09:54.967 --rc genhtml_legend=1 00:09:54.967 --rc geninfo_all_blocks=1 00:09:54.967 --rc geninfo_unexecuted_blocks=1 00:09:54.967 00:09:54.967 ' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.967 --rc genhtml_branch_coverage=1 00:09:54.967 --rc genhtml_function_coverage=1 00:09:54.967 --rc genhtml_legend=1 00:09:54.967 --rc geninfo_all_blocks=1 00:09:54.967 --rc geninfo_unexecuted_blocks=1 00:09:54.967 00:09:54.967 ' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.967 --rc genhtml_branch_coverage=1 00:09:54.967 --rc genhtml_function_coverage=1 00:09:54.967 --rc genhtml_legend=1 00:09:54.967 --rc geninfo_all_blocks=1 00:09:54.967 --rc geninfo_unexecuted_blocks=1 00:09:54.967 00:09:54.967 ' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:54.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.967 --rc genhtml_branch_coverage=1 00:09:54.967 --rc genhtml_function_coverage=1 00:09:54.967 --rc genhtml_legend=1 00:09:54.967 --rc geninfo_all_blocks=1 00:09:54.967 --rc geninfo_unexecuted_blocks=1 00:09:54.967 00:09:54.967 ' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.967 
16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.967 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.968 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:55.227 16:05:01 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:55.227 Cannot find device "nvmf_init_br" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:55.227 Cannot find device "nvmf_init_br2" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:55.227 Cannot find device "nvmf_tgt_br" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:55.227 Cannot find device "nvmf_tgt_br2" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:55.227 Cannot find device "nvmf_init_br" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:55.227 Cannot find device "nvmf_init_br2" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:55.227 Cannot find device "nvmf_tgt_br" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:55.227 Cannot find device "nvmf_tgt_br2" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:55.227 Cannot find device "nvmf_br" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:55.227 Cannot find device "nvmf_init_if" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:55.227 Cannot find device "nvmf_init_if2" 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:55.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:55.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:55.227 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:55.487 16:05:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:55.487 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:55.487 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:09:55.487 00:09:55.487 --- 10.0.0.3 ping statistics --- 00:09:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.487 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:55.487 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:55.487 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:09:55.487 00:09:55.487 --- 10.0.0.4 ping statistics --- 00:09:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.487 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:55.487 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:55.487 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:09:55.487 00:09:55.487 --- 10.0.0.1 ping statistics --- 00:09:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.487 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:55.487 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:55.487 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:09:55.487 00:09:55.487 --- 10.0.0.2 ping statistics --- 00:09:55.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:55.487 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=77694 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 77694 00:09:55.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
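One detail worth calling out from the firewall rules traced above: the ipts helper (common.sh@790) appends an iptables comment to every rule it installs, which is what lets the teardown at the end of the test strip exactly these rules and nothing else. The three rules reduce to:

  # open NVMe/TCP port 4420 on both initiator interfaces and allow bridge-local forwarding;
  # the SPDK_NVMF comment tags each rule for later removal via iptables-save filtering
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'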
00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 77694 ']' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.487 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.487 [2024-11-19 16:05:02.130201] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:09:55.487 [2024-11-19 16:05:02.130291] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.746 [2024-11-19 16:05:02.284427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:55.746 [2024-11-19 16:05:02.311430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:55.746 [2024-11-19 16:05:02.311696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:55.746 [2024-11-19 16:05:02.311885] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:55.746 [2024-11-19 16:05:02.312160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:55.746 [2024-11-19 16:05:02.312369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
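The target application is launched inside the namespace (nvmf/common.sh@508 above), which is why its listeners end up on 10.0.0.3/10.0.0.4 rather than on the host side of the veth pairs. A sketch of that launch as the trace shows it; the backgrounding and waitforlisten's polling loop are not traced here, so those details are inferred:

  # run nvmf_tgt on 4 cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF)
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!                 # 77694 in this run
  waitforlisten "$nvmfpid"   # blocks until the app answers on /var/tmp/spdk.sock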
00:09:55.746 [2024-11-19 16:05:02.313366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.746 [2024-11-19 16:05:02.313444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.746 [2024-11-19 16:05:02.313486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.746 [2024-11-19 16:05:02.313490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.746 [2024-11-19 16:05:02.350939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:55.746 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:56.314 [2024-11-19 16:05:02.756284] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.314 16:05:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:09:56.573 Malloc0 00:09:56.573 16:05:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:09:56.831 16:05:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.089 16:05:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:57.347 [2024-11-19 16:05:03.981019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:57.347 16:05:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:09:57.605 [2024-11-19 16:05:04.273278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:09:57.605 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:57.864 16:05:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
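After connecting to the subsystem through both listeners, the script locates the subsystem in sysfs and derives the two path devices from it. A sketch of that discovery as traced above; how get_subsystem reads the NQN and serial for its comparisons is not visible in the log, so those reads are an assumption:

  # find the nvme-subsystem entry whose NQN and serial match the test subsystem
  for s in /sys/class/nvme-subsystem/*; do
      [[ $(cat "$s/subsysnqn") == nqn.2016-06.io.spdk:cnode1 ]] || continue   # assumed read
      [[ $(cat "$s/serial") == SPDKISFASTANDAWESOME ]] || continue            # assumed read
      subsystem=${s##*/}            # nvme-subsys0 in this run
  done
  # one controller per connection, so two path devices hang off the subsystem
  paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
  paths=("${paths[@]##*/}")         # -> nvme0c0n1 nvme0c1n1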
00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=77787 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:00.396 16:05:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:00.396 [global] 00:10:00.396 thread=1 00:10:00.396 invalidate=1 00:10:00.396 rw=randrw 00:10:00.396 time_based=1 00:10:00.396 runtime=6 00:10:00.396 ioengine=libaio 00:10:00.396 direct=1 00:10:00.396 bs=4096 00:10:00.396 iodepth=128 00:10:00.396 norandommap=0 00:10:00.396 numjobs=1 00:10:00.396 00:10:00.396 verify_dump=1 00:10:00.396 verify_backlog=512 00:10:00.396 verify_state_save=0 00:10:00.396 do_verify=1 00:10:00.396 verify=crc32c-intel 00:10:00.396 [job0] 00:10:00.396 filename=/dev/nvme0n1 00:10:00.396 Could not set queue depth (nvme0n1) 00:10:00.396 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:00.396 fio-3.35 00:10:00.396 Starting 1 thread 00:10:00.964 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:01.223 16:05:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
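check_ana_state, whose trace starts just above and continues below, polls the ANA state the kernel reports for one path device until it matches the expected value. A sketch of what the traced variables and tests imply; the retry/sleep handling is not visible in this log, so that loop is an assumption:

  check_ana_state() {
      local path=$1 ana_state=$2
      local timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      # assumption: retry roughly once per second until the file exists and matches
      while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
          (( timeout-- > 0 )) || return 1
          sleep 1
      done
  }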
00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:01.482 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:02.049 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:02.308 16:05:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 77787 00:10:06.496 00:10:06.496 job0: (groupid=0, jobs=1): err= 0: pid=77808: Tue Nov 19 16:05:12 2024 00:10:06.496 read: IOPS=9947, BW=38.9MiB/s (40.7MB/s)(233MiB/6007msec) 00:10:06.496 slat (usec): min=2, max=9554, avg=59.96, stdev=244.67 00:10:06.496 clat (usec): min=2004, max=18561, avg=8852.88, stdev=1665.45 00:10:06.496 lat (usec): min=2013, max=18596, avg=8912.84, stdev=1672.20 00:10:06.496 clat percentiles (usec): 00:10:06.496 | 1.00th=[ 4424], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 7898], 00:10:06.496 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8979], 00:10:06.496 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10290], 95.00th=[12649], 00:10:06.496 | 99.00th=[14091], 99.50th=[14484], 99.90th=[15664], 99.95th=[16909], 00:10:06.496 | 99.99th=[18220] 00:10:06.496 bw ( KiB/s): min= 9776, max=24200, per=51.14%, avg=20350.67, stdev=4621.06, samples=12 00:10:06.496 iops : min= 2444, max= 6050, avg=5087.67, stdev=1155.26, samples=12 00:10:06.496 write: IOPS=5639, BW=22.0MiB/s (23.1MB/s)(120MiB/5427msec); 0 zone resets 00:10:06.496 slat (usec): min=4, max=1788, avg=68.09, stdev=159.92 00:10:06.496 clat (usec): min=1866, max=15639, avg=7604.21, stdev=1460.50 00:10:06.496 lat (usec): min=1897, max=15797, avg=7672.30, stdev=1465.70 00:10:06.496 clat percentiles (usec): 00:10:06.496 | 1.00th=[ 3589], 5.00th=[ 4359], 10.00th=[ 5473], 20.00th=[ 6980], 00:10:06.496 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8029], 00:10:06.496 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9241], 00:10:06.496 | 99.00th=[11994], 99.50th=[12911], 99.90th=[14484], 99.95th=[15008], 00:10:06.496 | 99.99th=[15270] 00:10:06.496 bw ( KiB/s): min=10096, max=24160, per=90.28%, avg=20368.00, stdev=4346.58, samples=12 00:10:06.496 iops : min= 2524, max= 6040, avg=5092.00, stdev=1086.64, samples=12 00:10:06.496 lat (msec) : 2=0.01%, 4=1.23%, 10=89.42%, 20=9.35% 00:10:06.496 cpu : usr=5.46%, sys=21.86%, ctx=5200, majf=0, minf=72 00:10:06.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:06.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:06.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:06.496 issued rwts: total=59753,30608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:06.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:06.496 00:10:06.497 Run status group 0 (all jobs): 00:10:06.497 READ: bw=38.9MiB/s (40.7MB/s), 38.9MiB/s-38.9MiB/s (40.7MB/s-40.7MB/s), io=233MiB (245MB), run=6007-6007msec 00:10:06.497 WRITE: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=120MiB (125MB), run=5427-5427msec 00:10:06.497 00:10:06.497 Disk stats (read/write): 00:10:06.497 nvme0n1: ios=58903/30007, merge=0/0, ticks=496936/212857, in_queue=709793, util=98.68% 00:10:06.497 16:05:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:06.755 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=77890 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:07.014 16:05:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:07.014 [global] 00:10:07.014 thread=1 00:10:07.014 invalidate=1 00:10:07.014 rw=randrw 00:10:07.015 time_based=1 00:10:07.015 runtime=6 00:10:07.015 ioengine=libaio 00:10:07.015 direct=1 00:10:07.015 bs=4096 00:10:07.015 iodepth=128 00:10:07.015 norandommap=0 00:10:07.015 numjobs=1 00:10:07.015 00:10:07.015 verify_dump=1 00:10:07.015 verify_backlog=512 00:10:07.015 verify_state_save=0 00:10:07.015 do_verify=1 00:10:07.015 verify=crc32c-intel 00:10:07.015 [job0] 00:10:07.015 filename=/dev/nvme0n1 00:10:07.015 Could not set queue depth (nvme0n1) 00:10:07.015 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:07.015 fio-3.35 00:10:07.015 Starting 1 thread 00:10:07.951 16:05:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:08.210 16:05:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:08.776 
16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:08.776 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:09.341 16:05:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 77890 00:10:13.526 00:10:13.526 job0: (groupid=0, jobs=1): err= 0: pid=77911: Tue Nov 19 16:05:19 2024 00:10:13.526 read: IOPS=11.8k, BW=46.1MiB/s (48.3MB/s)(277MiB/6003msec) 00:10:13.526 slat (usec): min=2, max=5571, avg=42.69, stdev=176.93 00:10:13.526 clat (usec): min=248, max=14448, avg=7469.66, stdev=1684.68 00:10:13.526 lat (usec): min=279, max=14466, avg=7512.35, stdev=1696.74 00:10:13.526 clat percentiles (usec): 00:10:13.526 | 1.00th=[ 3458], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 6128], 00:10:13.526 | 30.00th=[ 6915], 40.00th=[ 7373], 50.00th=[ 7635], 60.00th=[ 7898], 00:10:13.526 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8979], 95.00th=[10552], 00:10:13.526 | 99.00th=[12387], 99.50th=[12780], 99.90th=[13435], 99.95th=[13698], 00:10:13.526 | 99.99th=[14222] 00:10:13.526 bw ( KiB/s): min=11704, max=38024, per=52.13%, avg=24601.45, stdev=8158.95, samples=11 00:10:13.526 iops : min= 2926, max= 9506, avg=6150.36, stdev=2039.74, samples=11 00:10:13.526 write: IOPS=7005, BW=27.4MiB/s (28.7MB/s)(144MiB/5261msec); 0 zone resets 00:10:13.526 slat (usec): min=5, max=1881, avg=54.20, stdev=122.74 00:10:13.526 clat (usec): min=980, max=14204, avg=6318.24, stdev=1595.07 00:10:13.526 lat (usec): min=1008, max=14229, avg=6372.43, stdev=1607.27 00:10:13.526 clat percentiles (usec): 00:10:13.526 | 1.00th=[ 2835], 5.00th=[ 3523], 10.00th=[ 3949], 20.00th=[ 4686], 00:10:13.526 | 30.00th=[ 5473], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7046], 00:10:13.526 | 70.00th=[ 7308], 80.00th=[ 7570], 90.00th=[ 7963], 95.00th=[ 8356], 00:10:13.526 | 99.00th=[10290], 99.50th=[10945], 99.90th=[12256], 99.95th=[12911], 00:10:13.526 | 99.99th=[13566] 00:10:13.526 bw ( KiB/s): min=12288, max=37296, per=87.86%, avg=24618.91, stdev=7984.97, samples=11 00:10:13.526 iops : min= 3072, max= 9324, avg=6154.73, stdev=1996.24, samples=11 00:10:13.526 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:13.526 lat (msec) : 2=0.12%, 4=5.08%, 10=90.68%, 20=4.08% 00:10:13.526 cpu : usr=5.95%, sys=24.74%, ctx=6577, majf=0, minf=145 00:10:13.526 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:10:13.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:13.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:13.526 issued rwts: total=70822,36854,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:13.526 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:13.526 00:10:13.526 Run status group 0 (all jobs): 00:10:13.526 READ: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=277MiB (290MB), run=6003-6003msec 00:10:13.526 WRITE: bw=27.4MiB/s (28.7MB/s), 27.4MiB/s-27.4MiB/s (28.7MB/s-28.7MB/s), io=144MiB (151MB), run=5261-5261msec 00:10:13.526 00:10:13.526 Disk stats (read/write): 00:10:13.526 nvme0n1: ios=70091/36037, merge=0/0, ticks=492892/206461, in_queue=699353, util=98.62% 00:10:13.526 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:13.526 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.526 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:10:13.526 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:13.526 16:05:19 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.526 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.526 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:13.526 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:10:13.526 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.900 rmmod nvme_tcp 00:10:13.900 rmmod nvme_fabrics 00:10:13.900 rmmod nvme_keyring 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 77694 ']' 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 77694 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 77694 ']' 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 77694 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77694 00:10:13.900 killing process with pid 77694 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.900 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77694' 00:10:13.901 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 77694 00:10:13.901 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 77694 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:14.187 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:14.187 
16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:14.188 00:10:14.188 real 0m19.399s 00:10:14.188 user 1m11.652s 00:10:14.188 sys 0m10.458s 00:10:14.188 ************************************ 00:10:14.188 END TEST nvmf_target_multipath 00:10:14.188 ************************************ 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.188 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:14.448 ************************************ 00:10:14.448 START TEST nvmf_zcopy 00:10:14.448 ************************************ 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:14.448 * Looking for test storage... 
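Just above, nvmftestfini tore down everything the multipath test created; the teardown reduces to stripping the tagged firewall rules and deleting the virtual links, bridge and namespace. A condensed sketch of what was traced (the *_if2/*_br2 twins are elided, and the namespace removal happens inside remove_spdk_ns, whose commands are redirected away from the log):

  # drop only the rules tagged SPDK_NVMF earlier in the run
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # detach from the bridge, bring links down, then delete them and the bridge
  ip link set nvmf_init_br nomaster
  ip link set nvmf_tgt_br nomaster
  ip link set nvmf_init_br down
  ip link set nvmf_tgt_br down
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  # assumption: remove_spdk_ns finishes with an 'ip netns delete nvmf_tgt_ns_spdk'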
00:10:14.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:10:14.448 16:05:20 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.448 --rc genhtml_branch_coverage=1 00:10:14.448 --rc genhtml_function_coverage=1 00:10:14.448 --rc genhtml_legend=1 00:10:14.448 --rc geninfo_all_blocks=1 00:10:14.448 --rc geninfo_unexecuted_blocks=1 00:10:14.448 00:10:14.448 ' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.448 --rc genhtml_branch_coverage=1 00:10:14.448 --rc genhtml_function_coverage=1 00:10:14.448 --rc genhtml_legend=1 00:10:14.448 --rc geninfo_all_blocks=1 00:10:14.448 --rc geninfo_unexecuted_blocks=1 00:10:14.448 00:10:14.448 ' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.448 --rc genhtml_branch_coverage=1 00:10:14.448 --rc genhtml_function_coverage=1 00:10:14.448 --rc genhtml_legend=1 00:10:14.448 --rc geninfo_all_blocks=1 00:10:14.448 --rc geninfo_unexecuted_blocks=1 00:10:14.448 00:10:14.448 ' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:14.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.448 --rc genhtml_branch_coverage=1 00:10:14.448 --rc genhtml_function_coverage=1 00:10:14.448 --rc genhtml_legend=1 00:10:14.448 --rc geninfo_all_blocks=1 00:10:14.448 --rc geninfo_unexecuted_blocks=1 00:10:14.448 00:10:14.448 ' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
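The lcov version check traced above (lt 1.15 2 via cmp_versions) is a plain field-by-field numeric comparison of dotted version strings: split on '.', '-' and ':', then the first unequal field decides. A condensed sketch of the logic the trace walks through; treating a missing field as 0 is an assumption, since this run decides at the very first field (1 < 2):

  IFS=.-: read -ra ver1 <<< "1.15"     # -> (1 15)
  IFS=.-: read -ra ver2 <<< "2"        # -> (2)
  result="="
  for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { result=">"; break; }
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { result="<"; break; }
  done
  [[ $result == "<" ]] && echo "lcov 1.15 is older than 2"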
00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.448 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:14.449 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
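nvmftestinit with NET_TYPE=virt and a tcp transport lands back in nvmf_veth_init, which first probes for leftovers from a previous run before rebuilding the same topology the multipath test used. Every probe below is allowed to fail: the "Cannot find device ..." messages are expected on a clean environment, and the traced "true" after each one shows the failure being swallowed. Roughly (the exact guard used in nvmf/common.sh is not visible in the trace):

  ip link set nvmf_init_br nomaster || true    # "Cannot find device" is fine here
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns add nvmf_tgt_ns_spdk                # then the topology is rebuilt from scratch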
00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:14.449 Cannot find device "nvmf_init_br" 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:14.449 16:05:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:14.449 Cannot find device "nvmf_init_br2" 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:14.449 Cannot find device "nvmf_tgt_br" 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:14.449 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:14.709 Cannot find device "nvmf_tgt_br2" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:14.709 Cannot find device "nvmf_init_br" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:14.709 Cannot find device "nvmf_init_br2" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:14.709 Cannot find device "nvmf_tgt_br" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:14.709 Cannot find device "nvmf_tgt_br2" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:14.709 Cannot find device "nvmf_br" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:14.709 Cannot find device "nvmf_init_if" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:14.709 Cannot find device "nvmf_init_if2" 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:14.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:14.709 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:14.709 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:14.969 16:05:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:14.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:14.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:10:14.969 00:10:14.969 --- 10.0.0.3 ping statistics --- 00:10:14.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.969 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:14.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:14.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:10:14.969 00:10:14.969 --- 10.0.0.4 ping statistics --- 00:10:14.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.969 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:14.969 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.969 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:10:14.969 00:10:14.969 --- 10.0.0.1 ping statistics --- 00:10:14.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.969 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:14.969 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.969 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:10:14.969 00:10:14.969 --- 10.0.0.2 ping statistics --- 00:10:14.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.969 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=78215 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 78215 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 78215 ']' 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.969 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.969 [2024-11-19 16:05:21.600664] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
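Everything from nvmf_veth_init down to the four pings above is plumbing: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the bridge ends enslaved to nvmf_br, iptables ACCEPT rules for port 4420, and a reachability check in both directions. A condensed sketch of the same topology (one veth pair per side instead of the two used by the harness), using only interface names and 10.0.0.0/24 addresses that appear in the trace:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                                # ties the two sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # the ipts wrapper in the trace also tags each rule with '-m comment' so cleanup can find it later
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                             # initiator -> target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1              # target namespace -> initiator address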
00:10:14.969 [2024-11-19 16:05:21.600781] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.229 [2024-11-19 16:05:21.750970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.229 [2024-11-19 16:05:21.774374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.229 [2024-11-19 16:05:21.774436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.229 [2024-11-19 16:05:21.774458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.229 [2024-11-19 16:05:21.774469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.229 [2024-11-19 16:05:21.774477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:15.229 [2024-11-19 16:05:21.774862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.229 [2024-11-19 16:05:21.808558] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.229 [2024-11-19 16:05:21.923113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.229 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.229 [2024-11-19 16:05:21.939298] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.488 malloc0 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:15.488 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:15.489 { 00:10:15.489 "params": { 00:10:15.489 "name": "Nvme$subsystem", 00:10:15.489 "trtype": "$TEST_TRANSPORT", 00:10:15.489 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:15.489 "adrfam": "ipv4", 00:10:15.489 "trsvcid": "$NVMF_PORT", 00:10:15.489 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:15.489 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:15.489 "hdgst": ${hdgst:-false}, 00:10:15.489 "ddgst": ${ddgst:-false} 00:10:15.489 }, 00:10:15.489 "method": "bdev_nvme_attach_controller" 00:10:15.489 } 00:10:15.489 EOF 00:10:15.489 )") 00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
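Stripped of the xtrace noise, the target-side setup that target/zcopy.sh performed above is a short RPC sequence: a TCP transport created with zero-copy enabled, one subsystem, data and discovery listeners on 10.0.0.3:4420, and a 32 MB malloc bdev attached as namespace 1. Condensed, with rpc_cmd being the harness helper that forwards each call to the running nvmf_tgt:

    rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy                      # TCP transport, zero-copy enabled
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420  # discovery service listener
    rpc_cmd bdev_malloc_create 32 4096 -b malloc0                             # 32 MB bdev, 4096-byte blocks
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1     # attached as NSID 1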
00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:15.489 16:05:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:15.489 "params": { 00:10:15.489 "name": "Nvme1", 00:10:15.489 "trtype": "tcp", 00:10:15.489 "traddr": "10.0.0.3", 00:10:15.489 "adrfam": "ipv4", 00:10:15.489 "trsvcid": "4420", 00:10:15.489 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:15.489 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:15.489 "hdgst": false, 00:10:15.489 "ddgst": false 00:10:15.489 }, 00:10:15.489 "method": "bdev_nvme_attach_controller" 00:10:15.489 }' 00:10:15.489 [2024-11-19 16:05:22.031503] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:10:15.489 [2024-11-19 16:05:22.031603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78241 ] 00:10:15.489 [2024-11-19 16:05:22.185895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.748 [2024-11-19 16:05:22.210052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.748 [2024-11-19 16:05:22.250927] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:15.748 Running I/O for 10 seconds... 00:10:18.059 6356.00 IOPS, 49.66 MiB/s [2024-11-19T16:05:25.711Z] 6509.00 IOPS, 50.85 MiB/s [2024-11-19T16:05:26.648Z] 6575.00 IOPS, 51.37 MiB/s [2024-11-19T16:05:27.584Z] 6587.75 IOPS, 51.47 MiB/s [2024-11-19T16:05:28.521Z] 6553.40 IOPS, 51.20 MiB/s [2024-11-19T16:05:29.459Z] 6550.17 IOPS, 51.17 MiB/s [2024-11-19T16:05:30.398Z] 6489.86 IOPS, 50.70 MiB/s [2024-11-19T16:05:31.776Z] 6484.50 IOPS, 50.66 MiB/s [2024-11-19T16:05:32.713Z] 6473.89 IOPS, 50.58 MiB/s [2024-11-19T16:05:32.713Z] 6471.10 IOPS, 50.56 MiB/s 00:10:25.998 Latency(us) 00:10:25.998 [2024-11-19T16:05:32.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.998 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:25.998 Verification LBA range: start 0x0 length 0x1000 00:10:25.998 Nvme1n1 : 10.02 6472.63 50.57 0.00 0.00 19711.87 1772.45 33125.47 00:10:25.998 [2024-11-19T16:05:32.713Z] =================================================================================================================== 00:10:25.998 [2024-11-19T16:05:32.713Z] Total : 6472.63 50.57 0.00 0.00 19711.87 1772.45 33125.47 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=78358 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:25.998 { 00:10:25.998 "params": { 00:10:25.998 "name": "Nvme$subsystem", 00:10:25.998 "trtype": "$TEST_TRANSPORT", 00:10:25.998 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:25.998 "adrfam": "ipv4", 00:10:25.998 "trsvcid": "$NVMF_PORT", 00:10:25.998 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:25.998 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:25.998 "hdgst": ${hdgst:-false}, 00:10:25.998 "ddgst": ${ddgst:-false} 00:10:25.998 }, 00:10:25.998 "method": "bdev_nvme_attach_controller" 00:10:25.998 } 00:10:25.998 EOF 00:10:25.998 )") 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:25.998 [2024-11-19 16:05:32.491019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.491079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:25.998 16:05:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:25.998 "params": { 00:10:25.998 "name": "Nvme1", 00:10:25.998 "trtype": "tcp", 00:10:25.998 "traddr": "10.0.0.3", 00:10:25.998 "adrfam": "ipv4", 00:10:25.998 "trsvcid": "4420", 00:10:25.998 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:25.998 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:25.998 "hdgst": false, 00:10:25.998 "ddgst": false 00:10:25.998 }, 00:10:25.998 "method": "bdev_nvme_attach_controller" 00:10:25.998 }' 00:10:25.998 [2024-11-19 16:05:32.502971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.503015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.514967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.515008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.526973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.527015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.531453] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
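The banner above belongs to the second bdevperf job (perfpid 78358): a 5-second randrw run at queue depth 128 with 8 KiB I/O, fed its bdev_nvme_attach_controller config through a file-descriptor redirection of gen_nvmf_target_json. An equivalent hand-typed invocation with process substitution, using the literal path and options from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!    # the harness records the PID so it can poll the job while it runs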
00:10:25.998 [2024-11-19 16:05:32.531536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78358 ] 00:10:25.998 [2024-11-19 16:05:32.538973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.539015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.550985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.551031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.562976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.563018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.574983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.575025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.586983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.587024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.598982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.599024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.610985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.611026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.622985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.623026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.634988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.635029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.646991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.647033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.658997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.659039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.671001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.671045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.677041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.998 [2024-11-19 16:05:32.683027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.683077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.695024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.695073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.998 [2024-11-19 16:05:32.697613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.998 [2024-11-19 16:05:32.707007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.998 [2024-11-19 16:05:32.707033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.719044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.719081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.731047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.731086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.735938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:26.258 [2024-11-19 16:05:32.743048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.743080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.755034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.755067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.767060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.767095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.779050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.779081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.791056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.791089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.803070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.803101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.815073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.815100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.827132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.827167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 Running I/O for 5 seconds... 
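"Running I/O for 5 seconds..." marks the start of that randrw job, and from here to the end of the excerpt the trace is dominated by repeated pairs of "Requested NSID 1 already in use" / "Unable to add namespace". These are generated by the test rather than by a target fault: while the perf job is alive, the script keeps re-issuing the namespace-add RPC against cnode1, and every attempt fails because malloc0 is already attached as NSID 1, which exercises the subsystem pause/resume path under live I/O. A hedged sketch of a loop that would produce exactly this pattern (the precise loop body in target/zcopy.sh is an assumption):

    # poke the subsystem for as long as the randrw job runs; every call is expected to fail
    while kill -0 "$perfpid" 2> /dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
    wait "$perfpid"    # then collect bdevperf's exit status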
00:10:26.258 [2024-11-19 16:05:32.841903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.841937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.856755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.856788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.873204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.873264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.888634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.888686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.258 [2024-11-19 16:05:32.898871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.258 [2024-11-19 16:05:32.898936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.259 [2024-11-19 16:05:32.914766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.259 [2024-11-19 16:05:32.914799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.259 [2024-11-19 16:05:32.932230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.259 [2024-11-19 16:05:32.932320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.259 [2024-11-19 16:05:32.947879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.259 [2024-11-19 16:05:32.947930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.259 [2024-11-19 16:05:32.966122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.259 [2024-11-19 16:05:32.966172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:32.980569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:32.980622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:32.991999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:32.992050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.008646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.008711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.024523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.024575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.042979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.043028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.056824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 
[2024-11-19 16:05:33.056874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.072081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.072131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.091893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.091943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.105689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.105739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.121256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.121317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.138503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.138541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.155026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.155074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.171354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.171401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.188152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.188201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.206814] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.206865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.518 [2024-11-19 16:05:33.222198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.518 [2024-11-19 16:05:33.222233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.238970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.239003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.256184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.256217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.271895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.271929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.286925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.286959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.303017] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.303049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.319761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.319795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.335052] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.335086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.344390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.344439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.359961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.359994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.370013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.370047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.385034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.385068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.401402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.401437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.411402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.411438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.426765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.426797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.437891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.777 [2024-11-19 16:05:33.437940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.777 [2024-11-19 16:05:33.452783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.778 [2024-11-19 16:05:33.452835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.778 [2024-11-19 16:05:33.469290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.778 [2024-11-19 16:05:33.469340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.778 [2024-11-19 16:05:33.487703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.778 [2024-11-19 16:05:33.487754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.502262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.502350] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.519131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.519181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.533498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.533549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.548369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.548416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.564216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.564296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.582358] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.582411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.597304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.597353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.613059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.613109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.631577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.631611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.645742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.645792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.661785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.661835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.679627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.679677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.694166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.694216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.711302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.711383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.727117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.727166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.037 [2024-11-19 16:05:33.745122] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.037 [2024-11-19 16:05:33.745187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.759749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.759798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.775097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.775146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.784662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.784712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.800037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.800087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.814874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.814939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.831150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.831200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 11952.00 IOPS, 93.38 MiB/s [2024-11-19T16:05:34.011Z] [2024-11-19 16:05:33.846719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.846767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.855940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.855988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.871272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.871332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.887465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.887514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.904145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.904194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.919904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.919955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.930442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.296 [2024-11-19 16:05:33.930485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.296 [2024-11-19 16:05:33.945545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:27.296 [2024-11-19 16:05:33.945582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.297 [2024-11-19 16:05:33.962960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.297 [2024-11-19 16:05:33.963009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.297 [2024-11-19 16:05:33.979622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.297 [2024-11-19 16:05:33.979672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.297 [2024-11-19 16:05:33.997649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.297 [2024-11-19 16:05:33.997698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.012791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.012827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.029267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.029367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.046175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.046224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.061928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.061977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.080635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.080684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.095652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.095701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.113646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.113695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.129807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.129858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.147598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.147636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.162693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.162730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.172810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.172861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.188652] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.188703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.206326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.206376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.222933] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.222983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.240272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.240332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.556 [2024-11-19 16:05:34.256128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.556 [2024-11-19 16:05:34.256178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.272050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.272100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.281251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.281299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.297207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.297265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.314349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.314401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.331083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.331132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.347030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.347078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.364002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.364053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.379973] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.380025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.396864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.396946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.413233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.413314] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.429267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.429348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.448679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.448716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.464305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.464389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.480812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.480864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.497508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.497557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.816 [2024-11-19 16:05:34.514080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.816 [2024-11-19 16:05:34.514130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.531816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.531865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.547154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.547203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.563532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.563580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.580456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.580505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.596183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.596232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.614059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.614109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.629330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.629367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.646351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.646389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.661803] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.661870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.672153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.672205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.688304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.688364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.703869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.703934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.719470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.719502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.728711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.728762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.744099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.744151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.753909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.753957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.767971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.768020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.075 [2024-11-19 16:05:34.784157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.075 [2024-11-19 16:05:34.784207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.802465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.802517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.817242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.817321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 11985.50 IOPS, 93.64 MiB/s [2024-11-19T16:05:35.050Z] [2024-11-19 16:05:34.834401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.834440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.849596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.849661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.860841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:28.335 [2024-11-19 16:05:34.860890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.877948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.877998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.892954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.893004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.910067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.910116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.927535] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.927584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.943411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.943463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.953667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.953706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.969846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.969910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:34.985457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:34.985506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:35.001249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:35.001297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:35.020568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:35.020617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:35.035088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:35.035137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.335 [2024-11-19 16:05:35.044656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.335 [2024-11-19 16:05:35.044709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.060502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.060552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.077575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.077625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.092784] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.092833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.101943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.101992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.116774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.116824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.132929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.132978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.150770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.150820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.165962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.166013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.176072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.176124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.191209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.191286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.200917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.200966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.215294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.215361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.231233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.231295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.249431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.249479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.264931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.264981] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.282377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.282428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.595 [2024-11-19 16:05:35.297822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.595 [2024-11-19 16:05:35.297871] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.314713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.314761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.331842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.331895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.348266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.348331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.357363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.357411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.373527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.373578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.385693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.385731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.401785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.401837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.419446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.419529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.433963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.434012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.449585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.449635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.468208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.468267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.482098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.482148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.497194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.497268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.507627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.507678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.522569] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.522607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.537381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.537430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.553422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.553471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:28.855 [2024-11-19 16:05:35.563023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:28.855 [2024-11-19 16:05:35.563072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.579069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.579119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.588196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.588287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.604065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.604114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.619067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.619115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.635274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.635336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.651619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.651669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.669753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.669802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.114 [2024-11-19 16:05:35.685493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.114 [2024-11-19 16:05:35.685542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.703081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.703130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.719775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.719824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.735423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.735473] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.747660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.747711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.764057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.764093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.779827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.779880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.797145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.797196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.115 [2024-11-19 16:05:35.814202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.115 [2024-11-19 16:05:35.814309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.830490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.830527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 12035.33 IOPS, 94.03 MiB/s [2024-11-19T16:05:36.089Z] [2024-11-19 16:05:35.847181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.847231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.862712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.862763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.872066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.872117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.888444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.888494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.899947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.899997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.916412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.916461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.933101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.933149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.948817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.948866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 
16:05:35.966081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.966131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.981037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.981068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:35.998920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:35.998971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:36.013472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:36.013522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:36.030025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:36.030074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:36.045370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:36.045419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:36.061332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:36.061380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.374 [2024-11-19 16:05:36.078609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.374 [2024-11-19 16:05:36.078676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.093121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.093169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.109171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.109220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.126363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.126400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.144130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.144179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.160106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.160155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.176942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.176992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.193615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.193651] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.208831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.208935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.225049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.225122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.242955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.243028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.258484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.258548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.276911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.276987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.291406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.291481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.306373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.306442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.322436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.322503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.634 [2024-11-19 16:05:36.337939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.634 [2024-11-19 16:05:36.337988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.356217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.356279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.372123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.372172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.388893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.388942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.405798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.405848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.422593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.422677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.438968] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.439016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.456673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.456724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.471449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.471498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.487392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.487438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.503598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.503646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.521452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.521499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.535917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.535964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.551135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.551182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.562218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.562342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.578240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.578342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:29.894 [2024-11-19 16:05:36.595848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:29.894 [2024-11-19 16:05:36.595895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.612967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.613017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.628807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.628855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.646514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.646564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.663200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.663232] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.681080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.681129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.696319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.696354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.713205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.713304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.729570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.729629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.746030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.746081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.764822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.764872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.779725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.779774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.789201] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.789293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.804906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.804976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.821756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.821832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 12139.25 IOPS, 94.84 MiB/s [2024-11-19T16:05:36.869Z] [2024-11-19 16:05:36.838828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.838896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.154 [2024-11-19 16:05:36.853613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.154 [2024-11-19 16:05:36.853701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.871936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.871997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.886089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.886160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 
16:05:36.901010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.901081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.917777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.917849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.934123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.934197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.953088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.953150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.968207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.968265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.977507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.977572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:36.993813] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:36.993863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.011342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.011379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.025714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.025764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.041918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.041966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.059857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.059907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.075163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.075216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.084429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.084478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.099642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.099694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.114765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.114815] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.414 [2024-11-19 16:05:37.124083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.414 [2024-11-19 16:05:37.124132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.140068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.140118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.155222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.155301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.166234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.166340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.182129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.182179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.199688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.199751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.215559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.215596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.232664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.232718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.249895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.249945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.267173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.267223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.281193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.281251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.297143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.297191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.313489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.313538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.332177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.332208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.346530] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.346583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.362708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.362771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.674 [2024-11-19 16:05:37.378739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.674 [2024-11-19 16:05:37.378786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.395645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.395695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.412789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.412837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.428006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.428055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.443245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.443324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.461196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.461270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.476177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.476228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.491943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.491994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.512972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.513022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.527367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.527416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.544151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.544201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.559436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.559488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.568438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.568487] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.584512] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.584576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.596325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.596385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.613085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.933 [2024-11-19 16:05:37.613134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.933 [2024-11-19 16:05:37.629670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.934 [2024-11-19 16:05:37.629722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.648508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.648561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.663505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.663556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.673030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.673080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.689118] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.689167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.705242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.705299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.723175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.723224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.738765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.738812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.749235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.749301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.764224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.764284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.779996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.780046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.796185] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.796263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.811928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.811977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 [2024-11-19 16:05:37.827199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.827273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 12157.60 IOPS, 94.98 MiB/s [2024-11-19T16:05:37.908Z] [2024-11-19 16:05:37.836716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.193 [2024-11-19 16:05:37.836765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.193 00:10:31.193 Latency(us) 00:10:31.193 [2024-11-19T16:05:37.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:31.193 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:31.193 Nvme1n1 : 5.01 12158.37 94.99 0.00 0.00 10514.63 4289.63 18350.08 00:10:31.193 [2024-11-19T16:05:37.908Z] =================================================================================================================== 00:10:31.193 [2024-11-19T16:05:37.909Z] Total : 12158.37 94.99 0.00 0.00 10514.63 4289.63 18350.08 00:10:31.194 [2024-11-19 16:05:37.848291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 16:05:37.848367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 16:05:37.860285] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 16:05:37.860343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 16:05:37.872332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 16:05:37.872398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 16:05:37.884333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 16:05:37.884396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.194 [2024-11-19 16:05:37.896329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.194 [2024-11-19 16:05:37.896388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 16:05:37.908346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.908415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 16:05:37.920333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.920390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 16:05:37.932337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.932384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 
16:05:37.944337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.944390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 16:05:37.956328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.956374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 [2024-11-19 16:05:37.968367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.453 [2024-11-19 16:05:37.968401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.453 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (78358) - No such process 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 78358 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.453 delay0 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.453 16:05:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:31.712 [2024-11-19 16:05:38.176573] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:38.343 Initializing NVMe Controllers 00:10:38.343 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:10:38.343 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:38.343 Initialization complete. Launching workers. 
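The long run of paired errors above ("Requested NSID 1 already in use" followed by "Unable to add namespace") appears to be the expected result of repeated add-namespace RPCs that request NSID 1 while it is already attached, not a fault in the target. At this point in the log the zcopy test detaches NSID 1, wraps malloc0 in a delay bdev, re-attaches it as NSID 1, and launches the abort example against the target. The lines below are a minimal hand-driven sketch of that same sequence, assuming SPDK's standard scripts/rpc.py helper on the default RPC socket and a checkout rooted at the repo top; the subsystem NQN, delay-bdev parameters, target address 10.0.0.3:4420 and abort flags are copied from the log, while the rpc.py invocation itself is an assumption and not part of the original run:

  # detach namespace 1 from the subsystem (it currently points at malloc0)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # create a delay bdev on top of malloc0 using the latency values shown in the log
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # re-attach the delayed bdev as namespace 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # drive slow random I/O and submit aborts against it for 5 seconds (flags as in the log)
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The per-I/O abort statistics for this run follow immediately below.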
00:10:38.343 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 138 00:10:38.343 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 425, failed to submit 33 00:10:38.343 success 300, unsuccessful 125, failed 0 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:38.343 rmmod nvme_tcp 00:10:38.343 rmmod nvme_fabrics 00:10:38.343 rmmod nvme_keyring 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:38.343 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 78215 ']' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 78215 ']' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:38.344 killing process with pid 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78215' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 78215 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.344 16:05:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:38.344 ************************************ 00:10:38.344 END TEST nvmf_zcopy 00:10:38.344 ************************************ 00:10:38.344 00:10:38.344 real 0m23.824s 00:10:38.344 user 0m38.810s 00:10:38.344 sys 0m6.746s 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.344 ************************************ 00:10:38.344 START TEST nvmf_nmic 00:10:38.344 ************************************ 00:10:38.344 16:05:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:38.344 * Looking for test storage... 00:10:38.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:38.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.344 --rc genhtml_branch_coverage=1 00:10:38.344 --rc genhtml_function_coverage=1 00:10:38.344 --rc genhtml_legend=1 00:10:38.344 --rc geninfo_all_blocks=1 00:10:38.344 --rc geninfo_unexecuted_blocks=1 00:10:38.344 00:10:38.344 ' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:38.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.344 --rc genhtml_branch_coverage=1 00:10:38.344 --rc genhtml_function_coverage=1 00:10:38.344 --rc genhtml_legend=1 00:10:38.344 --rc geninfo_all_blocks=1 00:10:38.344 --rc geninfo_unexecuted_blocks=1 00:10:38.344 00:10:38.344 ' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:38.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.344 --rc genhtml_branch_coverage=1 00:10:38.344 --rc genhtml_function_coverage=1 00:10:38.344 --rc genhtml_legend=1 00:10:38.344 --rc geninfo_all_blocks=1 00:10:38.344 --rc geninfo_unexecuted_blocks=1 00:10:38.344 00:10:38.344 ' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:38.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.344 --rc genhtml_branch_coverage=1 00:10:38.344 --rc genhtml_function_coverage=1 00:10:38.344 --rc genhtml_legend=1 00:10:38.344 --rc geninfo_all_blocks=1 00:10:38.344 --rc geninfo_unexecuted_blocks=1 00:10:38.344 00:10:38.344 ' 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.344 16:05:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.344 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.345 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:38.345 16:05:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:38.345 16:05:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:38.345 Cannot 
find device "nvmf_init_br" 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:38.345 Cannot find device "nvmf_init_br2" 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:38.345 Cannot find device "nvmf_tgt_br" 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:38.345 Cannot find device "nvmf_tgt_br2" 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:38.345 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:38.605 Cannot find device "nvmf_init_br" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:38.605 Cannot find device "nvmf_init_br2" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:38.605 Cannot find device "nvmf_tgt_br" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:38.605 Cannot find device "nvmf_tgt_br2" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:38.605 Cannot find device "nvmf_br" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:38.605 Cannot find device "nvmf_init_if" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:38.605 Cannot find device "nvmf_init_if2" 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.605 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:38.605 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:38.606 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:38.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:38.865 00:10:38.865 --- 10.0.0.3 ping statistics --- 00:10:38.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.865 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:38.865 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:38.865 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.077 ms 00:10:38.865 00:10:38.865 --- 10.0.0.4 ping statistics --- 00:10:38.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.865 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:10:38.865 00:10:38.865 --- 10.0.0.1 ping statistics --- 00:10:38.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.865 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:10:38.865 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:38.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:38.866 00:10:38.866 --- 10.0.0.2 ping statistics --- 00:10:38.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.866 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=78739 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 78739 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 78739 ']' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.866 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:38.866 [2024-11-19 16:05:45.475860] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
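(Aside: with the veth/bridge topology up, the four pings above confirm that the host-side initiator addresses (10.0.0.1, 10.0.0.2) and the namespace-side target addresses (10.0.0.3, 10.0.0.4) can reach each other, after which the target application is started inside the namespace. A minimal sketch of that launch step, assuming repo-root paths; the harness uses its own waitforlisten helper, but polling the RPC socket is one way to wait for readiness:
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py -t 30 rpc_get_methods > /dev/null   # returns once the app answers on /var/tmp/spdk.sock
)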
00:10:38.866 [2024-11-19 16:05:45.475952] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.125 [2024-11-19 16:05:45.628442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.125 [2024-11-19 16:05:45.654664] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.125 [2024-11-19 16:05:45.654955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.125 [2024-11-19 16:05:45.655079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.125 [2024-11-19 16:05:45.655175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.125 [2024-11-19 16:05:45.655283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.125 [2024-11-19 16:05:45.656308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.125 [2024-11-19 16:05:45.656398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.125 [2024-11-19 16:05:45.658322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.125 [2024-11-19 16:05:45.658340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.125 [2024-11-19 16:05:45.691003] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.125 [2024-11-19 16:05:45.805860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.125 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 Malloc0 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:39.385 16:05:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 [2024-11-19 16:05:45.861992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 test case1: single bdev can't be used in multiple subsystems 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 [2024-11-19 16:05:45.885874] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:39.385 [2024-11-19 16:05:45.886114] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:39.385 [2024-11-19 16:05:45.886194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.385 request: 00:10:39.385 { 00:10:39.385 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:39.385 "namespace": { 00:10:39.385 "bdev_name": "Malloc0", 00:10:39.385 "no_auto_visible": false 00:10:39.385 }, 00:10:39.385 "method": "nvmf_subsystem_add_ns", 00:10:39.385 "req_id": 1 00:10:39.385 } 00:10:39.385 Got JSON-RPC error response 00:10:39.385 response: 00:10:39.385 { 00:10:39.385 "code": -32602, 00:10:39.385 "message": "Invalid parameters" 00:10:39.385 } 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:39.385 Adding namespace failed - expected result. 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:39.385 test case2: host connect to nvmf target in multiple paths 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:39.385 [2024-11-19 16:05:45.898020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.385 16:05:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:39.385 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:39.644 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:39.644 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:39.644 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.644 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:39.644 16:05:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.546 16:05:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:41.546 16:05:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:41.546 [global] 00:10:41.546 thread=1 00:10:41.546 invalidate=1 00:10:41.546 rw=write 00:10:41.546 time_based=1 00:10:41.546 runtime=1 00:10:41.546 ioengine=libaio 00:10:41.546 direct=1 00:10:41.546 bs=4096 00:10:41.546 iodepth=1 00:10:41.546 norandommap=0 00:10:41.546 numjobs=1 00:10:41.546 00:10:41.546 verify_dump=1 00:10:41.546 verify_backlog=512 00:10:41.546 verify_state_save=0 00:10:41.546 do_verify=1 00:10:41.546 verify=crc32c-intel 00:10:41.546 [job0] 00:10:41.546 filename=/dev/nvme0n1 00:10:41.546 Could not set queue depth (nvme0n1) 00:10:41.804 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.804 fio-3.35 00:10:41.804 Starting 1 thread 00:10:43.178 00:10:43.178 job0: (groupid=0, jobs=1): err= 0: pid=78823: Tue Nov 19 16:05:49 2024 00:10:43.178 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:43.178 slat (nsec): min=12298, max=62988, avg=15600.87, stdev=4854.26 00:10:43.178 clat (usec): min=132, max=831, avg=172.01, stdev=24.96 00:10:43.178 lat (usec): min=145, max=850, avg=187.61, stdev=25.80 00:10:43.178 clat percentiles (usec): 00:10:43.179 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:10:43.179 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 174], 00:10:43.179 | 70.00th=[ 180], 80.00th=[ 190], 90.00th=[ 202], 95.00th=[ 215], 00:10:43.179 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 269], 00:10:43.179 | 99.99th=[ 832] 00:10:43.179 write: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec); 0 zone resets 00:10:43.179 slat (nsec): min=15645, max=94445, avg=22914.51, stdev=6872.26 00:10:43.179 clat (usec): min=77, max=564, avg=107.28, stdev=21.56 00:10:43.179 lat (usec): min=95, max=588, avg=130.19, stdev=23.53 00:10:43.179 clat percentiles (usec): 00:10:43.179 | 1.00th=[ 82], 5.00th=[ 87], 10.00th=[ 90], 20.00th=[ 94], 00:10:43.179 | 30.00th=[ 97], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 108], 00:10:43.179 | 70.00th=[ 113], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 143], 00:10:43.179 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 251], 99.95th=[ 523], 00:10:43.179 | 99.99th=[ 562] 00:10:43.179 bw ( KiB/s): min=12288, max=12288, per=97.07%, avg=12288.00, stdev= 0.00, samples=1 00:10:43.179 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:43.179 lat (usec) : 100=21.17%, 250=78.72%, 500=0.06%, 750=0.03%, 1000=0.02% 00:10:43.179 cpu : usr=2.70%, sys=9.10%, ctx=6242, majf=0, minf=5 00:10:43.179 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.179 issued rwts: total=3072,3168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.179 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.179 00:10:43.179 Run status group 0 (all jobs): 00:10:43.179 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:43.179 WRITE: bw=12.4MiB/s (13.0MB/s), 12.4MiB/s-12.4MiB/s (13.0MB/s-13.0MB/s), io=12.4MiB (13.0MB), run=1001-1001msec 00:10:43.179 00:10:43.179 Disk stats (read/write): 00:10:43.179 nvme0n1: ios=2641/3072, merge=0/0, 
ticks=496/377, in_queue=873, util=91.38% 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:43.179 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.179 rmmod nvme_tcp 00:10:43.179 rmmod nvme_fabrics 00:10:43.179 rmmod nvme_keyring 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 78739 ']' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 78739 ']' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.179 killing process with pid 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78739' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@973 -- # kill 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 78739 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:43.179 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:43.438 16:05:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:43.438 00:10:43.438 real 0m5.336s 00:10:43.438 user 0m15.589s 00:10:43.438 sys 0m2.310s 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.438 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.438 
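(Aside: the nmic run above exercised two things: a single bdev cannot back namespaces in two subsystems (the "bdev Malloc0 already claimed ... error=-1" and "Invalid parameters" responses are the expected result), and a host can reach the same subsystem through two listeners (ports 4420 and 4421) before the fio write/verify pass. A condensed sketch of the first check, assuming a running target and repo-root paths:
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail: Malloc0 is already claimed by cnode1
)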
************************************ 00:10:43.438 END TEST nvmf_nmic 00:10:43.438 ************************************ 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:43.698 ************************************ 00:10:43.698 START TEST nvmf_fio_target 00:10:43.698 ************************************ 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:43.698 * Looking for test storage... 00:10:43.698 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.698 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:43.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.699 --rc genhtml_branch_coverage=1 00:10:43.699 --rc genhtml_function_coverage=1 00:10:43.699 --rc genhtml_legend=1 00:10:43.699 --rc geninfo_all_blocks=1 00:10:43.699 --rc geninfo_unexecuted_blocks=1 00:10:43.699 00:10:43.699 ' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:43.699 
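The "lt 1.15 2" walk above is scripts/common.sh deciding, from the installed lcov version, which coverage flags to export: each version string is split on '.' and '-' and compared field by field. A simplified bash sketch of that comparison, offered as an illustration rather than the actual helper from the repo:

  # Simplified sketch of the dotted-version comparison traced above.
  # Returns 0 (true) when $1 is strictly older than $2.
  version_lt() {
    local IFS='.-' i
    local -a a=($1) b=($2)            # split on '.' and '-' like cmp_versions does
    for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
      local x=${a[i]:-0} y=${b[i]:-0} # missing fields compare as 0
      (( x < y )) && return 0
      (( x > y )) && return 1
    done
    return 1                          # equal versions are not "less than"
  }

  # Roughly what the trace is doing: lcov 1.15 is older than 2, so the
  # old-style '--rc lcov_*' option names are selected for LCOV_OPTS.
  if version_lt 1.15 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi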
16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.699 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.700 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.700 16:05:50 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.700 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:43.959 Cannot find device "nvmf_init_br" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:43.959 Cannot find device "nvmf_init_br2" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:43.959 Cannot find device "nvmf_tgt_br" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.959 Cannot find device "nvmf_tgt_br2" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:43.959 Cannot find device "nvmf_init_br" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:43.959 Cannot find device "nvmf_init_br2" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:43.959 Cannot find device "nvmf_tgt_br" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:43.959 Cannot find device "nvmf_tgt_br2" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:43.959 Cannot find device "nvmf_br" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:43.959 Cannot find device "nvmf_init_if" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:43.959 Cannot find device "nvmf_init_if2" 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:43.959 
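Each failed delete above is followed by "# true", so leftovers from an earlier run are cleaned up on a best-effort basis before nvmf_veth_init rebuilds the topology. The stretch of trace that follows boils down to: create a namespace for the target, add four veth pairs, move the target-side ends into the namespace, give the initiator ends 10.0.0.1/.2 and the target ends 10.0.0.3/.4, bridge the host-side peers, open TCP port 4420, and ping across. A consolidated sketch with the names and addresses from the trace (the iptables comments are abbreviated; the real rules embed the full rule text after an SPDK_NVMF: tag so the teardown's grep can strip them):

  # Consolidated sketch of nvmf_veth_init as traced below.
  ip netns add nvmf_tgt_ns_spdk

  # Four veth pairs: the *_if ends carry the IPs, the *_br peers get bridged.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target-side ends live in the namespace where nvmf_tgt will run.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # Bring links up (plus lo inside the namespace), then bridge the host-side peers.
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
  done

  # Accept NVMe/TCP (port 4420) on the initiator ends, allow bridge-local
  # forwarding, then check reachability in both directions with single pings.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2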
16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:43.959 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:44.219 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.219 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:10:44.219 00:10:44.219 --- 10.0.0.3 ping statistics --- 00:10:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.219 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:44.219 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:44.219 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:44.219 00:10:44.219 --- 10.0.0.4 ping statistics --- 00:10:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.219 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:10:44.219 00:10:44.219 --- 10.0.0.1 ping statistics --- 00:10:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.219 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:44.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:10:44.219 00:10:44.219 --- 10.0.0.2 ping statistics --- 00:10:44.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.219 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=79050 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 79050 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 79050 ']' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.219 16:05:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.219 [2024-11-19 16:05:50.884478] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
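nvmfappstart launches nvmf_tgt inside the namespace (pid 79050, -m 0xF so four reactors) and waits on the RPC socket; from there everything fio.sh does goes through rpc.py and the host's nvme-cli. Condensed, the traced sequence that follows amounts to the sketch below, with the same calls, bdev names, NQN, and address as in the trace; $NVME_HOSTNQN and $NVME_HOSTID stand for the per-run values produced by "nvme gen-hostnqn" earlier:

  # Condensed sketch of the target configuration traced below.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data

  # Seven 64 MiB malloc bdevs with 512 B blocks: Malloc0/1 exported as-is,
  # Malloc2/3 striped into raid0, Malloc4/5/6 concatenated into concat0.
  for _ in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

  # One subsystem with four namespaces and a TCP listener on the target address.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  done
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

  # Host side: connect with the per-run host NQN/ID, then wait for the
  # four namespaces to show up as /dev/nvme0n1 through /dev/nvme0n4.
  nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
       -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420

Once the four namespaces are visible, fio-wrapper drives the three fio passes traced afterwards: sequential write, randwrite, and write again at iodepth=128, each against /dev/nvme0n1 through /dev/nvme0n4.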
00:10:44.219 [2024-11-19 16:05:50.884735] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.479 [2024-11-19 16:05:51.034925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.479 [2024-11-19 16:05:51.058915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.479 [2024-11-19 16:05:51.059206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.479 [2024-11-19 16:05:51.059426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.479 [2024-11-19 16:05:51.059625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.479 [2024-11-19 16:05:51.059682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.479 [2024-11-19 16:05:51.060682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.479 [2024-11-19 16:05:51.060916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.479 [2024-11-19 16:05:51.060773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.479 [2024-11-19 16:05:51.060908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.479 [2024-11-19 16:05:51.093371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.412 16:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:45.670 [2024-11-19 16:05:52.137086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.670 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:45.928 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:45.928 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.186 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:46.186 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.443 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:46.443 16:05:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:46.701 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:46.701 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:46.958 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.217 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:47.217 16:05:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.507 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:47.507 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:47.784 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:47.784 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:48.041 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:48.297 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:48.297 16:05:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:48.554 16:05:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:48.554 16:05:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:48.812 16:05:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:49.070 [2024-11-19 16:05:55.614166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:49.070 16:05:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:49.328 16:05:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:49.587 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:49.846 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:49.846 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:49.846 16:05:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.846 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:49.846 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:49.846 16:05:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:51.746 16:05:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:51.746 [global] 00:10:51.746 thread=1 00:10:51.746 invalidate=1 00:10:51.746 rw=write 00:10:51.746 time_based=1 00:10:51.746 runtime=1 00:10:51.746 ioengine=libaio 00:10:51.746 direct=1 00:10:51.746 bs=4096 00:10:51.746 iodepth=1 00:10:51.746 norandommap=0 00:10:51.746 numjobs=1 00:10:51.746 00:10:51.746 verify_dump=1 00:10:51.746 verify_backlog=512 00:10:51.746 verify_state_save=0 00:10:51.746 do_verify=1 00:10:51.746 verify=crc32c-intel 00:10:51.746 [job0] 00:10:51.746 filename=/dev/nvme0n1 00:10:51.746 [job1] 00:10:51.746 filename=/dev/nvme0n2 00:10:51.746 [job2] 00:10:51.746 filename=/dev/nvme0n3 00:10:51.746 [job3] 00:10:51.746 filename=/dev/nvme0n4 00:10:51.746 Could not set queue depth (nvme0n1) 00:10:51.746 Could not set queue depth (nvme0n2) 00:10:51.746 Could not set queue depth (nvme0n3) 00:10:51.746 Could not set queue depth (nvme0n4) 00:10:52.003 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.004 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.004 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.004 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.004 fio-3.35 00:10:52.004 Starting 4 threads 00:10:53.378 00:10:53.378 job0: (groupid=0, jobs=1): err= 0: pid=79240: Tue Nov 19 16:05:59 2024 00:10:53.378 read: IOPS=2256, BW=9027KiB/s (9244kB/s)(9036KiB/1001msec) 00:10:53.378 slat (nsec): min=9224, max=60570, avg=15196.35, stdev=3911.72 00:10:53.378 clat (usec): min=140, max=667, avg=227.57, stdev=53.38 00:10:53.378 lat (usec): min=155, max=681, avg=242.77, stdev=52.75 00:10:53.378 clat percentiles (usec): 00:10:53.378 | 1.00th=[ 149], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 176], 00:10:53.378 | 30.00th=[ 192], 40.00th=[ 215], 50.00th=[ 225], 60.00th=[ 235], 00:10:53.378 | 70.00th=[ 247], 80.00th=[ 265], 90.00th=[ 302], 95.00th=[ 322], 00:10:53.378 | 99.00th=[ 363], 99.50th=[ 388], 99.90th=[ 545], 99.95th=[ 553], 00:10:53.378 | 99.99th=[ 668] 
00:10:53.378 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:53.378 slat (nsec): min=11427, max=86846, avg=20592.07, stdev=6401.79 00:10:53.378 clat (usec): min=98, max=355, avg=152.82, stdev=29.06 00:10:53.378 lat (usec): min=120, max=371, avg=173.41, stdev=26.49 00:10:53.378 clat percentiles (usec): 00:10:53.378 | 1.00th=[ 108], 5.00th=[ 115], 10.00th=[ 120], 20.00th=[ 126], 00:10:53.378 | 30.00th=[ 133], 40.00th=[ 139], 50.00th=[ 149], 60.00th=[ 159], 00:10:53.378 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 192], 95.00th=[ 204], 00:10:53.378 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 277], 99.95th=[ 277], 00:10:53.378 | 99.99th=[ 355] 00:10:53.378 bw ( KiB/s): min=12288, max=12288, per=33.37%, avg=12288.00, stdev= 0.00, samples=1 00:10:53.378 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:53.378 lat (usec) : 100=0.02%, 250=86.91%, 500=12.99%, 750=0.08% 00:10:53.378 cpu : usr=2.30%, sys=6.60%, ctx=4826, majf=0, minf=5 00:10:53.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.378 issued rwts: total=2259,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.378 job1: (groupid=0, jobs=1): err= 0: pid=79241: Tue Nov 19 16:05:59 2024 00:10:53.378 read: IOPS=2162, BW=8651KiB/s (8859kB/s)(8660KiB/1001msec) 00:10:53.378 slat (nsec): min=9260, max=47712, avg=13920.77, stdev=4388.22 00:10:53.378 clat (usec): min=142, max=6034, avg=242.21, stdev=226.19 00:10:53.378 lat (usec): min=158, max=6051, avg=256.13, stdev=226.48 00:10:53.378 clat percentiles (usec): 00:10:53.378 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:10:53.378 | 30.00th=[ 192], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 239], 00:10:53.378 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[ 310], 95.00th=[ 330], 00:10:53.378 | 99.00th=[ 371], 99.50th=[ 545], 99.90th=[ 4490], 99.95th=[ 4555], 00:10:53.378 | 99.99th=[ 6063] 00:10:53.378 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:53.378 slat (nsec): min=14753, max=87259, avg=21934.65, stdev=6060.52 00:10:53.378 clat (usec): min=98, max=268, avg=149.09, stdev=27.90 00:10:53.378 lat (usec): min=120, max=294, avg=171.03, stdev=26.99 00:10:53.378 clat percentiles (usec): 00:10:53.378 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 123], 00:10:53.378 | 30.00th=[ 129], 40.00th=[ 137], 50.00th=[ 145], 60.00th=[ 155], 00:10:53.378 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 188], 95.00th=[ 198], 00:10:53.378 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 239], 99.95th=[ 247], 00:10:53.378 | 99.99th=[ 269] 00:10:53.378 bw ( KiB/s): min=12272, max=12272, per=33.32%, avg=12272.00, stdev= 0.00, samples=1 00:10:53.378 iops : min= 3068, max= 3068, avg=3068.00, stdev= 0.00, samples=1 00:10:53.378 lat (usec) : 100=0.08%, 250=85.97%, 500=13.65%, 750=0.11%, 1000=0.02% 00:10:53.378 lat (msec) : 2=0.02%, 4=0.08%, 10=0.06% 00:10:53.378 cpu : usr=1.80%, sys=7.00%, ctx=4727, majf=0, minf=15 00:10:53.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.378 issued rwts: total=2165,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:53.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.378 job2: (groupid=0, jobs=1): err= 0: pid=79242: Tue Nov 19 16:05:59 2024 00:10:53.378 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:10:53.378 slat (nsec): min=9922, max=62545, avg=14727.62, stdev=4623.09 00:10:53.378 clat (usec): min=212, max=372, avg=261.10, stdev=22.87 00:10:53.378 lat (usec): min=225, max=386, avg=275.83, stdev=23.04 00:10:53.378 clat percentiles (usec): 00:10:53.378 | 1.00th=[ 223], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:10:53.378 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:53.379 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:10:53.379 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 371], 00:10:53.379 | 99.99th=[ 371] 00:10:53.379 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:53.379 slat (nsec): min=11481, max=75045, avg=19194.70, stdev=5949.13 00:10:53.379 clat (usec): min=139, max=302, avg=208.53, stdev=24.07 00:10:53.379 lat (usec): min=162, max=329, avg=227.72, stdev=25.03 00:10:53.379 clat percentiles (usec): 00:10:53.379 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 188], 00:10:53.379 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:10:53.379 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 241], 95.00th=[ 253], 00:10:53.379 | 99.00th=[ 277], 99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 297], 00:10:53.379 | 99.99th=[ 302] 00:10:53.379 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:53.379 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:53.379 lat (usec) : 250=65.66%, 500=34.34% 00:10:53.379 cpu : usr=1.30%, sys=5.90%, ctx=3960, majf=0, minf=7 00:10:53.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.379 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.379 job3: (groupid=0, jobs=1): err= 0: pid=79243: Tue Nov 19 16:05:59 2024 00:10:53.379 read: IOPS=1910, BW=7640KiB/s (7824kB/s)(7648KiB/1001msec) 00:10:53.379 slat (nsec): min=9043, max=59795, avg=12951.17, stdev=4743.67 00:10:53.379 clat (usec): min=166, max=357, avg=263.13, stdev=22.32 00:10:53.379 lat (usec): min=194, max=372, avg=276.08, stdev=23.27 00:10:53.379 clat percentiles (usec): 00:10:53.379 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:10:53.379 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:10:53.379 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:10:53.379 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 359], 99.95th=[ 359], 00:10:53.379 | 99.99th=[ 359] 00:10:53.379 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:53.379 slat (nsec): min=11381, max=98474, avg=22235.92, stdev=7305.08 00:10:53.379 clat (usec): min=130, max=357, avg=205.32, stdev=23.01 00:10:53.379 lat (usec): min=155, max=388, avg=227.56, stdev=24.64 00:10:53.379 clat percentiles (usec): 00:10:53.379 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:10:53.379 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 204], 60.00th=[ 208], 00:10:53.379 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:10:53.379 | 99.00th=[ 269], 
99.50th=[ 281], 99.90th=[ 297], 99.95th=[ 297], 00:10:53.379 | 99.99th=[ 359] 00:10:53.379 bw ( KiB/s): min= 8192, max= 8192, per=22.24%, avg=8192.00, stdev= 0.00, samples=1 00:10:53.379 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:53.379 lat (usec) : 250=64.70%, 500=35.30% 00:10:53.379 cpu : usr=1.30%, sys=6.00%, ctx=3961, majf=0, minf=9 00:10:53.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.379 issued rwts: total=1912,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.379 00:10:53.379 Run status group 0 (all jobs): 00:10:53.379 READ: bw=32.2MiB/s (33.8MB/s), 7640KiB/s-9027KiB/s (7824kB/s-9244kB/s), io=32.2MiB (33.8MB), run=1001-1001msec 00:10:53.379 WRITE: bw=36.0MiB/s (37.7MB/s), 8184KiB/s-9.99MiB/s (8380kB/s-10.5MB/s), io=36.0MiB (37.7MB), run=1001-1001msec 00:10:53.379 00:10:53.379 Disk stats (read/write): 00:10:53.379 nvme0n1: ios=2098/2139, merge=0/0, ticks=515/321, in_queue=836, util=88.48% 00:10:53.379 nvme0n2: ios=2040/2048, merge=0/0, ticks=503/311, in_queue=814, util=88.31% 00:10:53.379 nvme0n3: ios=1536/1875, merge=0/0, ticks=392/354, in_queue=746, util=89.21% 00:10:53.379 nvme0n4: ios=1536/1873, merge=0/0, ticks=387/391, in_queue=778, util=89.77% 00:10:53.379 16:05:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:53.379 [global] 00:10:53.379 thread=1 00:10:53.379 invalidate=1 00:10:53.379 rw=randwrite 00:10:53.379 time_based=1 00:10:53.379 runtime=1 00:10:53.379 ioengine=libaio 00:10:53.379 direct=1 00:10:53.379 bs=4096 00:10:53.379 iodepth=1 00:10:53.379 norandommap=0 00:10:53.379 numjobs=1 00:10:53.379 00:10:53.379 verify_dump=1 00:10:53.379 verify_backlog=512 00:10:53.379 verify_state_save=0 00:10:53.379 do_verify=1 00:10:53.379 verify=crc32c-intel 00:10:53.379 [job0] 00:10:53.379 filename=/dev/nvme0n1 00:10:53.379 [job1] 00:10:53.379 filename=/dev/nvme0n2 00:10:53.379 [job2] 00:10:53.379 filename=/dev/nvme0n3 00:10:53.379 [job3] 00:10:53.379 filename=/dev/nvme0n4 00:10:53.379 Could not set queue depth (nvme0n1) 00:10:53.379 Could not set queue depth (nvme0n2) 00:10:53.379 Could not set queue depth (nvme0n3) 00:10:53.379 Could not set queue depth (nvme0n4) 00:10:53.379 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.379 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.379 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.379 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:53.379 fio-3.35 00:10:53.379 Starting 4 threads 00:10:54.753 00:10:54.753 job0: (groupid=0, jobs=1): err= 0: pid=79296: Tue Nov 19 16:06:01 2024 00:10:54.753 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:54.753 slat (nsec): min=9598, max=67796, avg=14468.42, stdev=3431.27 00:10:54.753 clat (usec): min=182, max=359, avg=240.01, stdev=20.60 00:10:54.753 lat (usec): min=195, max=373, avg=254.48, stdev=21.04 00:10:54.753 clat percentiles (usec): 00:10:54.753 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 
20.00th=[ 225], 00:10:54.753 | 30.00th=[ 229], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 241], 00:10:54.753 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 277], 00:10:54.753 | 99.00th=[ 322], 99.50th=[ 330], 99.90th=[ 347], 99.95th=[ 347], 00:10:54.753 | 99.99th=[ 359] 00:10:54.753 write: IOPS=2322, BW=9291KiB/s (9514kB/s)(9300KiB/1001msec); 0 zone resets 00:10:54.753 slat (nsec): min=15081, max=99173, avg=21770.26, stdev=4248.61 00:10:54.753 clat (usec): min=102, max=720, avg=180.88, stdev=37.57 00:10:54.753 lat (usec): min=129, max=739, avg=202.65, stdev=38.17 00:10:54.753 clat percentiles (usec): 00:10:54.753 | 1.00th=[ 116], 5.00th=[ 131], 10.00th=[ 157], 20.00th=[ 163], 00:10:54.753 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 184], 00:10:54.753 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 221], 00:10:54.753 | 99.00th=[ 247], 99.50th=[ 465], 99.90th=[ 635], 99.95th=[ 676], 00:10:54.753 | 99.99th=[ 717] 00:10:54.753 bw ( KiB/s): min= 9056, max= 9056, per=25.99%, avg=9056.00, stdev= 0.00, samples=1 00:10:54.753 iops : min= 2264, max= 2264, avg=2264.00, stdev= 0.00, samples=1 00:10:54.753 lat (usec) : 250=88.50%, 500=11.25%, 750=0.25% 00:10:54.753 cpu : usr=2.00%, sys=6.80%, ctx=4374, majf=0, minf=7 00:10:54.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.753 issued rwts: total=2048,2325,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.753 job1: (groupid=0, jobs=1): err= 0: pid=79297: Tue Nov 19 16:06:01 2024 00:10:54.753 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:54.753 slat (nsec): min=9698, max=42194, avg=12597.19, stdev=2478.76 00:10:54.753 clat (usec): min=152, max=948, avg=241.37, stdev=25.20 00:10:54.753 lat (usec): min=165, max=963, avg=253.96, stdev=25.29 00:10:54.753 clat percentiles (usec): 00:10:54.753 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:10:54.753 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:10:54.753 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 277], 00:10:54.753 | 99.00th=[ 314], 99.50th=[ 318], 99.90th=[ 343], 99.95th=[ 343], 00:10:54.753 | 99.99th=[ 947] 00:10:54.753 write: IOPS=2296, BW=9187KiB/s (9407kB/s)(9196KiB/1001msec); 0 zone resets 00:10:54.753 slat (nsec): min=12069, max=64068, avg=18016.59, stdev=4208.58 00:10:54.753 clat (usec): min=88, max=5415, avg=188.03, stdev=118.44 00:10:54.753 lat (usec): min=108, max=5450, avg=206.04, stdev=118.82 00:10:54.753 clat percentiles (usec): 00:10:54.753 | 1.00th=[ 114], 5.00th=[ 139], 10.00th=[ 161], 20.00th=[ 169], 00:10:54.754 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:10:54.754 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 225], 00:10:54.754 | 99.00th=[ 265], 99.50th=[ 510], 99.90th=[ 1057], 99.95th=[ 1287], 00:10:54.754 | 99.99th=[ 5407] 00:10:54.754 bw ( KiB/s): min= 8792, max= 8792, per=25.23%, avg=8792.00, stdev= 0.00, samples=1 00:10:54.754 iops : min= 2198, max= 2198, avg=2198.00, stdev= 0.00, samples=1 00:10:54.754 lat (usec) : 100=0.12%, 250=86.77%, 500=12.81%, 750=0.21%, 1000=0.02% 00:10:54.754 lat (msec) : 2=0.05%, 10=0.02% 00:10:54.754 cpu : usr=1.40%, sys=5.60%, ctx=4347, majf=0, minf=11 00:10:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:10:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 issued rwts: total=2048,2299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.754 job2: (groupid=0, jobs=1): err= 0: pid=79298: Tue Nov 19 16:06:01 2024 00:10:54.754 read: IOPS=1770, BW=7081KiB/s (7251kB/s)(7088KiB/1001msec) 00:10:54.754 slat (nsec): min=13589, max=46270, avg=16501.84, stdev=2955.72 00:10:54.754 clat (usec): min=155, max=631, avg=279.83, stdev=35.58 00:10:54.754 lat (usec): min=170, max=647, avg=296.33, stdev=36.18 00:10:54.754 clat percentiles (usec): 00:10:54.754 | 1.00th=[ 233], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 262], 00:10:54.754 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 277], 60.00th=[ 281], 00:10:54.754 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 322], 00:10:54.754 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 562], 99.95th=[ 635], 00:10:54.754 | 99.99th=[ 635] 00:10:54.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:54.754 slat (usec): min=18, max=115, avg=23.19, stdev= 5.98 00:10:54.754 clat (usec): min=104, max=890, avg=205.18, stdev=33.42 00:10:54.754 lat (usec): min=125, max=998, avg=228.37, stdev=37.16 00:10:54.754 clat percentiles (usec): 00:10:54.754 | 1.00th=[ 122], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 192], 00:10:54.754 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 202], 60.00th=[ 206], 00:10:54.754 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 241], 00:10:54.754 | 99.00th=[ 297], 99.50th=[ 416], 99.90th=[ 506], 99.95th=[ 519], 00:10:54.754 | 99.99th=[ 889] 00:10:54.754 bw ( KiB/s): min= 8192, max= 8192, per=23.51%, avg=8192.00, stdev= 0.00, samples=1 00:10:54.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:54.754 lat (usec) : 250=54.79%, 500=44.92%, 750=0.26%, 1000=0.03% 00:10:54.754 cpu : usr=1.30%, sys=6.30%, ctx=3820, majf=0, minf=15 00:10:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 issued rwts: total=1772,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.754 job3: (groupid=0, jobs=1): err= 0: pid=79299: Tue Nov 19 16:06:01 2024 00:10:54.754 read: IOPS=1755, BW=7021KiB/s (7189kB/s)(7028KiB/1001msec) 00:10:54.754 slat (nsec): min=13421, max=38471, avg=16177.68, stdev=2627.94 00:10:54.754 clat (usec): min=160, max=2091, avg=280.31, stdev=53.21 00:10:54.754 lat (usec): min=174, max=2105, avg=296.49, stdev=53.35 00:10:54.754 clat percentiles (usec): 00:10:54.754 | 1.00th=[ 239], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:10:54.754 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:54.754 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:10:54.754 | 99.00th=[ 396], 99.50th=[ 416], 99.90th=[ 783], 99.95th=[ 2089], 00:10:54.754 | 99.99th=[ 2089] 00:10:54.754 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:54.754 slat (nsec): min=19262, max=95193, avg=23332.39, stdev=5996.65 00:10:54.754 clat (usec): min=110, max=692, avg=207.24, stdev=40.26 00:10:54.754 lat (usec): min=133, max=768, avg=230.57, stdev=43.99 00:10:54.754 clat percentiles 
(usec): 00:10:54.754 | 1.00th=[ 129], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:10:54.754 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 206], 00:10:54.754 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 243], 00:10:54.754 | 99.00th=[ 379], 99.50th=[ 494], 99.90th=[ 619], 99.95th=[ 660], 00:10:54.754 | 99.99th=[ 693] 00:10:54.754 bw ( KiB/s): min= 8192, max= 8192, per=23.51%, avg=8192.00, stdev= 0.00, samples=1 00:10:54.754 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:54.754 lat (usec) : 250=54.35%, 500=45.28%, 750=0.32%, 1000=0.03% 00:10:54.754 lat (msec) : 4=0.03% 00:10:54.754 cpu : usr=1.50%, sys=6.00%, ctx=3805, majf=0, minf=11 00:10:54.754 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.754 issued rwts: total=1757,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.754 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.754 00:10:54.754 Run status group 0 (all jobs): 00:10:54.754 READ: bw=29.8MiB/s (31.2MB/s), 7021KiB/s-8184KiB/s (7189kB/s-8380kB/s), io=29.8MiB (31.2MB), run=1001-1001msec 00:10:54.754 WRITE: bw=34.0MiB/s (35.7MB/s), 8184KiB/s-9291KiB/s (8380kB/s-9514kB/s), io=34.1MiB (35.7MB), run=1001-1001msec 00:10:54.754 00:10:54.754 Disk stats (read/write): 00:10:54.754 nvme0n1: ios=1812/2048, merge=0/0, ticks=457/386, in_queue=843, util=89.58% 00:10:54.754 nvme0n2: ios=1783/2048, merge=0/0, ticks=451/353, in_queue=804, util=90.31% 00:10:54.754 nvme0n3: ios=1536/1798, merge=0/0, ticks=440/385, in_queue=825, util=89.45% 00:10:54.754 nvme0n4: ios=1563/1758, merge=0/0, ticks=484/382, in_queue=866, util=90.83% 00:10:54.754 16:06:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:54.754 [global] 00:10:54.754 thread=1 00:10:54.754 invalidate=1 00:10:54.754 rw=write 00:10:54.754 time_based=1 00:10:54.754 runtime=1 00:10:54.754 ioengine=libaio 00:10:54.754 direct=1 00:10:54.754 bs=4096 00:10:54.754 iodepth=128 00:10:54.754 norandommap=0 00:10:54.754 numjobs=1 00:10:54.754 00:10:54.754 verify_dump=1 00:10:54.754 verify_backlog=512 00:10:54.754 verify_state_save=0 00:10:54.754 do_verify=1 00:10:54.754 verify=crc32c-intel 00:10:54.754 [job0] 00:10:54.754 filename=/dev/nvme0n1 00:10:54.754 [job1] 00:10:54.754 filename=/dev/nvme0n2 00:10:54.754 [job2] 00:10:54.754 filename=/dev/nvme0n3 00:10:54.754 [job3] 00:10:54.754 filename=/dev/nvme0n4 00:10:54.754 Could not set queue depth (nvme0n1) 00:10:54.754 Could not set queue depth (nvme0n2) 00:10:54.754 Could not set queue depth (nvme0n3) 00:10:54.754 Could not set queue depth (nvme0n4) 00:10:54.754 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.754 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.754 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.754 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:54.754 fio-3.35 00:10:54.754 Starting 4 threads 00:10:56.130 00:10:56.130 job0: (groupid=0, jobs=1): err= 0: pid=79359: Tue Nov 19 16:06:02 2024 00:10:56.130 read: IOPS=2557, BW=9.99MiB/s 
(10.5MB/s)(10.0MiB/1001msec) 00:10:56.130 slat (usec): min=6, max=6492, avg=194.10, stdev=984.12 00:10:56.130 clat (usec): min=3716, max=27057, avg=24949.02, stdev=3220.25 00:10:56.130 lat (usec): min=3733, max=27086, avg=25143.12, stdev=3081.82 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[ 4359], 5.00th=[20055], 10.00th=[24249], 20.00th=[24773], 00:10:56.130 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:10:56.130 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26608], 95.00th=[26608], 00:10:56.130 | 99.00th=[26870], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:10:56.130 | 99.99th=[27132] 00:10:56.130 write: IOPS=2558, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:56.130 slat (usec): min=20, max=7158, avg=186.67, stdev=872.31 00:10:56.130 clat (usec): min=247, max=26395, avg=24133.80, stdev=1166.04 00:10:56.130 lat (usec): min=3709, max=26492, avg=24320.47, stdev=738.39 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23725], 20.00th=[23725], 00:10:56.130 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:10:56.130 | 70.00th=[24511], 80.00th=[24773], 90.00th=[25035], 95.00th=[25560], 00:10:56.130 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26346], 99.95th=[26346], 00:10:56.130 | 99.99th=[26346] 00:10:56.130 bw ( KiB/s): min= 8952, max=11528, per=16.26%, avg=10240.00, stdev=1821.51, samples=2 00:10:56.130 iops : min= 2238, max= 2882, avg=2560.00, stdev=455.38, samples=2 00:10:56.130 lat (usec) : 250=0.02% 00:10:56.130 lat (msec) : 4=0.25%, 10=0.37%, 20=3.30%, 50=96.06% 00:10:56.130 cpu : usr=3.00%, sys=8.70%, ctx=191, majf=0, minf=1 00:10:56.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:56.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.130 issued rwts: total=2560,2561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.130 job1: (groupid=0, jobs=1): err= 0: pid=79360: Tue Nov 19 16:06:02 2024 00:10:56.130 read: IOPS=5472, BW=21.4MiB/s (22.4MB/s)(21.4MiB/1002msec) 00:10:56.130 slat (usec): min=5, max=4171, avg=88.71, stdev=381.59 00:10:56.130 clat (usec): min=747, max=16762, avg=11768.41, stdev=1210.68 00:10:56.130 lat (usec): min=2041, max=16800, avg=11857.12, stdev=1217.91 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[ 6325], 5.00th=[10290], 10.00th=[10683], 20.00th=[11469], 00:10:56.130 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:10:56.130 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12780], 95.00th=[13304], 00:10:56.130 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:10:56.130 | 99.99th=[16712] 00:10:56.130 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:56.130 slat (usec): min=12, max=4596, avg=83.40, stdev=461.12 00:10:56.130 clat (usec): min=5863, max=15948, avg=11029.98, stdev=979.45 00:10:56.130 lat (usec): min=5910, max=15994, avg=11113.38, stdev=1069.53 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[ 8094], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10421], 00:10:56.130 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:10:56.130 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11863], 95.00th=[12780], 00:10:56.130 | 99.00th=[14353], 99.50th=[15008], 99.90th=[15664], 99.95th=[15664], 00:10:56.130 
| 99.99th=[15926] 00:10:56.130 bw ( KiB/s): min=21856, max=23246, per=35.81%, avg=22551.00, stdev=982.88, samples=2 00:10:56.130 iops : min= 5464, max= 5811, avg=5637.50, stdev=245.37, samples=2 00:10:56.130 lat (usec) : 750=0.01% 00:10:56.130 lat (msec) : 4=0.19%, 10=5.02%, 20=94.78% 00:10:56.130 cpu : usr=5.00%, sys=15.38%, ctx=341, majf=0, minf=1 00:10:56.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:56.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.130 issued rwts: total=5483,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.130 job2: (groupid=0, jobs=1): err= 0: pid=79361: Tue Nov 19 16:06:02 2024 00:10:56.130 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:56.130 slat (usec): min=6, max=3111, avg=100.28, stdev=443.45 00:10:56.130 clat (usec): min=10131, max=14690, avg=13492.61, stdev=561.14 00:10:56.130 lat (usec): min=11044, max=15839, avg=13592.89, stdev=361.44 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[10945], 5.00th=[12911], 10.00th=[13042], 20.00th=[13304], 00:10:56.130 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:10:56.130 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14222], 00:10:56.130 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14615], 99.95th=[14746], 00:10:56.130 | 99.99th=[14746] 00:10:56.130 write: IOPS=5006, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1002msec); 0 zone resets 00:10:56.130 slat (usec): min=9, max=4057, avg=99.23, stdev=415.64 00:10:56.130 clat (usec): min=1455, max=14512, avg=12829.74, stdev=1218.22 00:10:56.130 lat (usec): min=1476, max=15677, avg=12928.97, stdev=1163.93 00:10:56.130 clat percentiles (usec): 00:10:56.130 | 1.00th=[ 5276], 5.00th=[11731], 10.00th=[12387], 20.00th=[12649], 00:10:56.130 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:10:56.130 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13829], 00:10:56.130 | 99.00th=[14353], 99.50th=[14484], 99.90th=[14484], 99.95th=[14484], 00:10:56.130 | 99.99th=[14484] 00:10:56.130 bw ( KiB/s): min=18640, max=20480, per=31.06%, avg=19560.00, stdev=1301.08, samples=2 00:10:56.130 iops : min= 4660, max= 5120, avg=4890.00, stdev=325.27, samples=2 00:10:56.130 lat (msec) : 2=0.23%, 4=0.03%, 10=0.69%, 20=99.05% 00:10:56.130 cpu : usr=3.90%, sys=14.99%, ctx=373, majf=0, minf=1 00:10:56.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:56.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.130 issued rwts: total=4608,5017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.130 job3: (groupid=0, jobs=1): err= 0: pid=79362: Tue Nov 19 16:06:02 2024 00:10:56.130 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:10:56.130 slat (usec): min=7, max=8605, avg=199.25, stdev=1015.66 00:10:56.130 clat (usec): min=6969, max=29214, avg=24845.10, stdev=2978.21 00:10:56.130 lat (usec): min=6986, max=29231, avg=25044.34, stdev=2836.63 00:10:56.130 clat percentiles (usec): 00:10:56.131 | 1.00th=[ 7504], 5.00th=[20579], 10.00th=[21890], 20.00th=[23725], 00:10:56.131 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:10:56.131 | 
70.00th=[26346], 80.00th=[26608], 90.00th=[26870], 95.00th=[27395], 00:10:56.131 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29230], 99.95th=[29230], 00:10:56.131 | 99.99th=[29230] 00:10:56.131 write: IOPS=2558, BW=10.00MiB/s (10.5MB/s)(10.0MiB/1002msec); 0 zone resets 00:10:56.131 slat (usec): min=11, max=6031, avg=181.69, stdev=859.77 00:10:56.131 clat (usec): min=761, max=28492, avg=24420.52, stdev=1951.35 00:10:56.131 lat (usec): min=790, max=28540, avg=24602.21, stdev=1711.55 00:10:56.131 clat percentiles (usec): 00:10:56.131 | 1.00th=[19006], 5.00th=[21103], 10.00th=[23200], 20.00th=[23725], 00:10:56.131 | 30.00th=[23987], 40.00th=[23987], 50.00th=[24249], 60.00th=[24511], 00:10:56.131 | 70.00th=[24773], 80.00th=[25560], 90.00th=[27132], 95.00th=[27395], 00:10:56.131 | 99.00th=[28181], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:10:56.131 | 99.99th=[28443] 00:10:56.131 bw ( KiB/s): min=11528, max=11528, per=18.31%, avg=11528.00, stdev= 0.00, samples=1 00:10:56.131 iops : min= 2882, max= 2882, avg=2882.00, stdev= 0.00, samples=1 00:10:56.131 lat (usec) : 1000=0.08% 00:10:56.131 lat (msec) : 10=0.62%, 20=2.48%, 50=96.82% 00:10:56.131 cpu : usr=2.60%, sys=8.69%, ctx=165, majf=0, minf=6 00:10:56.131 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:56.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.131 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:56.131 issued rwts: total=2560,2564,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.131 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:56.131 00:10:56.131 Run status group 0 (all jobs): 00:10:56.131 READ: bw=59.3MiB/s (62.2MB/s), 9.98MiB/s-21.4MiB/s (10.5MB/s-22.4MB/s), io=59.4MiB (62.3MB), run=1001-1002msec 00:10:56.131 WRITE: bw=61.5MiB/s (64.5MB/s), 9.99MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=61.6MiB (64.6MB), run=1001-1002msec 00:10:56.131 00:10:56.131 Disk stats (read/write): 00:10:56.131 nvme0n1: ios=2098/2336, merge=0/0, ticks=12335/12822, in_queue=25157, util=88.77% 00:10:56.131 nvme0n2: ios=4651/4920, merge=0/0, ticks=25953/22073, in_queue=48026, util=88.75% 00:10:56.131 nvme0n3: ios=4096/4160, merge=0/0, ticks=12281/11628, in_queue=23909, util=89.33% 00:10:56.131 nvme0n4: ios=2048/2368, merge=0/0, ticks=12567/12694, in_queue=25261, util=89.79% 00:10:56.131 16:06:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:56.131 [global] 00:10:56.131 thread=1 00:10:56.131 invalidate=1 00:10:56.131 rw=randwrite 00:10:56.131 time_based=1 00:10:56.131 runtime=1 00:10:56.131 ioengine=libaio 00:10:56.131 direct=1 00:10:56.131 bs=4096 00:10:56.131 iodepth=128 00:10:56.131 norandommap=0 00:10:56.131 numjobs=1 00:10:56.131 00:10:56.131 verify_dump=1 00:10:56.131 verify_backlog=512 00:10:56.131 verify_state_save=0 00:10:56.131 do_verify=1 00:10:56.131 verify=crc32c-intel 00:10:56.131 [job0] 00:10:56.131 filename=/dev/nvme0n1 00:10:56.131 [job1] 00:10:56.131 filename=/dev/nvme0n2 00:10:56.131 [job2] 00:10:56.131 filename=/dev/nvme0n3 00:10:56.131 [job3] 00:10:56.131 filename=/dev/nvme0n4 00:10:56.131 Could not set queue depth (nvme0n1) 00:10:56.131 Could not set queue depth (nvme0n2) 00:10:56.131 Could not set queue depth (nvme0n3) 00:10:56.131 Could not set queue depth (nvme0n4) 00:10:56.131 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.131 
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.131 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.131 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:56.131 fio-3.35 00:10:56.131 Starting 4 threads 00:10:57.506 00:10:57.506 job0: (groupid=0, jobs=1): err= 0: pid=79421: Tue Nov 19 16:06:03 2024 00:10:57.506 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:10:57.506 slat (usec): min=13, max=6074, avg=188.35, stdev=945.87 00:10:57.506 clat (usec): min=18325, max=26173, avg=24742.38, stdev=1070.35 00:10:57.506 lat (usec): min=23181, max=26190, avg=24930.72, stdev=505.69 00:10:57.506 clat percentiles (usec): 00:10:57.506 | 1.00th=[19268], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:10:57.506 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25035], 00:10:57.506 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25560], 95.00th=[25822], 00:10:57.506 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:10:57.506 | 99.99th=[26084] 00:10:57.507 write: IOPS=2646, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1004msec); 0 zone resets 00:10:57.507 slat (usec): min=15, max=5704, avg=186.29, stdev=875.39 00:10:57.507 clat (usec): min=291, max=25516, avg=23628.15, stdev=2658.52 00:10:57.507 lat (usec): min=5382, max=25548, avg=23814.44, stdev=2509.00 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[ 6194], 5.00th=[19006], 10.00th=[23462], 20.00th=[23725], 00:10:57.507 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:10:57.507 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:10:57.507 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:10:57.507 | 99.99th=[25560] 00:10:57.507 bw ( KiB/s): min= 8696, max=11760, per=16.24%, avg=10228.00, stdev=2166.58, samples=2 00:10:57.507 iops : min= 2174, max= 2940, avg=2557.00, stdev=541.64, samples=2 00:10:57.507 lat (usec) : 500=0.02% 00:10:57.507 lat (msec) : 10=0.61%, 20=4.01%, 50=95.36% 00:10:57.507 cpu : usr=2.69%, sys=9.07%, ctx=194, majf=0, minf=15 00:10:57.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:57.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.507 issued rwts: total=2560,2657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.507 job1: (groupid=0, jobs=1): err= 0: pid=79422: Tue Nov 19 16:06:03 2024 00:10:57.507 read: IOPS=5311, BW=20.7MiB/s (21.8MB/s)(20.8MiB/1002msec) 00:10:57.507 slat (usec): min=10, max=4396, avg=90.49, stdev=388.48 00:10:57.507 clat (usec): min=1628, max=16593, avg=11987.86, stdev=1187.18 00:10:57.507 lat (usec): min=1643, max=16911, avg=12078.35, stdev=1195.76 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[ 7832], 5.00th=[10421], 10.00th=[10945], 20.00th=[11731], 00:10:57.507 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:10:57.507 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12780], 95.00th=[13435], 00:10:57.507 | 99.00th=[15008], 99.50th=[15401], 99.90th=[15533], 99.95th=[16057], 00:10:57.507 | 99.99th=[16581] 00:10:57.507 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:10:57.507 slat (usec): min=11, max=4697, 
avg=84.15, stdev=465.90 00:10:57.507 clat (usec): min=6165, max=16289, avg=11172.66, stdev=1042.54 00:10:57.507 lat (usec): min=6194, max=16330, avg=11256.81, stdev=1130.51 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10552], 00:10:57.507 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:10:57.507 | 70.00th=[11469], 80.00th=[11600], 90.00th=[11994], 95.00th=[12649], 00:10:57.507 | 99.00th=[15008], 99.50th=[15270], 99.90th=[15926], 99.95th=[16057], 00:10:57.507 | 99.99th=[16319] 00:10:57.507 bw ( KiB/s): min=22192, max=22864, per=35.76%, avg=22528.00, stdev=475.18, samples=2 00:10:57.507 iops : min= 5548, max= 5716, avg=5632.00, stdev=118.79, samples=2 00:10:57.507 lat (msec) : 2=0.09%, 4=0.19%, 10=4.81%, 20=94.91% 00:10:57.507 cpu : usr=5.39%, sys=14.99%, ctx=335, majf=0, minf=9 00:10:57.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:57.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.507 issued rwts: total=5322,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.507 job2: (groupid=0, jobs=1): err= 0: pid=79423: Tue Nov 19 16:06:03 2024 00:10:57.507 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:10:57.507 slat (usec): min=12, max=6063, avg=188.44, stdev=947.23 00:10:57.507 clat (usec): min=18116, max=26124, avg=24734.69, stdev=1084.04 00:10:57.507 lat (usec): min=23146, max=26141, avg=24923.14, stdev=531.43 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[19268], 5.00th=[23462], 10.00th=[23987], 20.00th=[24511], 00:10:57.507 | 30.00th=[24773], 40.00th=[24773], 50.00th=[25035], 60.00th=[25035], 00:10:57.507 | 70.00th=[25035], 80.00th=[25297], 90.00th=[25560], 95.00th=[25822], 00:10:57.507 | 99.00th=[26084], 99.50th=[26084], 99.90th=[26084], 99.95th=[26084], 00:10:57.507 | 99.99th=[26084] 00:10:57.507 write: IOPS=2649, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1003msec); 0 zone resets 00:10:57.507 slat (usec): min=11, max=5769, avg=186.34, stdev=878.97 00:10:57.507 clat (usec): min=269, max=25772, avg=23637.26, stdev=2663.08 00:10:57.507 lat (usec): min=5264, max=25798, avg=23823.60, stdev=2513.03 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[ 6194], 5.00th=[19006], 10.00th=[23462], 20.00th=[23725], 00:10:57.507 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24249], 60.00th=[24249], 00:10:57.507 | 70.00th=[24511], 80.00th=[24511], 90.00th=[25035], 95.00th=[25297], 00:10:57.507 | 99.00th=[25560], 99.50th=[25560], 99.90th=[25822], 99.95th=[25822], 00:10:57.507 | 99.99th=[25822] 00:10:57.507 bw ( KiB/s): min= 8696, max=11784, per=16.26%, avg=10240.00, stdev=2183.55, samples=2 00:10:57.507 iops : min= 2174, max= 2946, avg=2560.00, stdev=545.89, samples=2 00:10:57.507 lat (usec) : 500=0.02% 00:10:57.507 lat (msec) : 10=0.61%, 20=4.03%, 50=95.34% 00:10:57.507 cpu : usr=2.10%, sys=9.48%, ctx=166, majf=0, minf=11 00:10:57.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:57.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.507 issued rwts: total=2560,2657,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.507 job3: (groupid=0, jobs=1): err= 0: 
pid=79424: Tue Nov 19 16:06:03 2024 00:10:57.507 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:10:57.507 slat (usec): min=10, max=3348, avg=102.18, stdev=476.07 00:10:57.507 clat (usec): min=10413, max=15226, avg=13714.06, stdev=586.26 00:10:57.507 lat (usec): min=12870, max=15265, avg=13816.23, stdev=353.37 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[10945], 5.00th=[13173], 10.00th=[13304], 20.00th=[13435], 00:10:57.507 | 30.00th=[13566], 40.00th=[13698], 50.00th=[13698], 60.00th=[13829], 00:10:57.507 | 70.00th=[13960], 80.00th=[14091], 90.00th=[14222], 95.00th=[14484], 00:10:57.507 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15139], 99.95th=[15139], 00:10:57.507 | 99.99th=[15270] 00:10:57.507 write: IOPS=4855, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1002msec); 0 zone resets 00:10:57.507 slat (usec): min=13, max=4150, avg=100.49, stdev=422.75 00:10:57.507 clat (usec): min=354, max=15902, avg=13010.05, stdev=1199.65 00:10:57.507 lat (usec): min=2988, max=15927, avg=13110.55, stdev=1122.32 00:10:57.507 clat percentiles (usec): 00:10:57.507 | 1.00th=[ 6652], 5.00th=[11994], 10.00th=[12518], 20.00th=[12780], 00:10:57.507 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:10:57.507 | 70.00th=[13435], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:10:57.507 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15926], 99.95th=[15926], 00:10:57.507 | 99.99th=[15926] 00:10:57.507 bw ( KiB/s): min=17424, max=20480, per=30.09%, avg=18952.00, stdev=2160.92, samples=2 00:10:57.507 iops : min= 4356, max= 5120, avg=4738.00, stdev=540.23, samples=2 00:10:57.507 lat (usec) : 500=0.01% 00:10:57.507 lat (msec) : 4=0.34%, 10=0.50%, 20=99.16% 00:10:57.507 cpu : usr=5.19%, sys=13.59%, ctx=301, majf=0, minf=18 00:10:57.507 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:57.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:57.507 issued rwts: total=4608,4865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.507 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:57.507 00:10:57.507 Run status group 0 (all jobs): 00:10:57.507 READ: bw=58.6MiB/s (61.4MB/s), 9.96MiB/s-20.7MiB/s (10.4MB/s-21.8MB/s), io=58.8MiB (61.6MB), run=1002-1004msec 00:10:57.507 WRITE: bw=61.5MiB/s (64.5MB/s), 10.3MiB/s-22.0MiB/s (10.8MB/s-23.0MB/s), io=61.8MiB (64.8MB), run=1002-1004msec 00:10:57.507 00:10:57.507 Disk stats (read/write): 00:10:57.507 nvme0n1: ios=2098/2432, merge=0/0, ticks=11727/13275, in_queue=25002, util=88.57% 00:10:57.507 nvme0n2: ios=4635/4794, merge=0/0, ticks=26371/21702, in_queue=48073, util=88.54% 00:10:57.507 nvme0n3: ios=2048/2432, merge=0/0, ticks=11747/13314, in_queue=25061, util=89.26% 00:10:57.507 nvme0n4: ios=4032/4096, merge=0/0, ticks=12372/11495, in_queue=23867, util=89.71% 00:10:57.507 16:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:57.507 16:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=79438 00:10:57.507 16:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:57.507 16:06:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:57.507 [global] 00:10:57.507 thread=1 00:10:57.507 invalidate=1 00:10:57.507 rw=read 00:10:57.507 time_based=1 00:10:57.507 runtime=10 00:10:57.507 ioengine=libaio 
00:10:57.507 direct=1 00:10:57.507 bs=4096 00:10:57.507 iodepth=1 00:10:57.507 norandommap=1 00:10:57.507 numjobs=1 00:10:57.507 00:10:57.507 [job0] 00:10:57.507 filename=/dev/nvme0n1 00:10:57.507 [job1] 00:10:57.507 filename=/dev/nvme0n2 00:10:57.507 [job2] 00:10:57.507 filename=/dev/nvme0n3 00:10:57.507 [job3] 00:10:57.507 filename=/dev/nvme0n4 00:10:57.507 Could not set queue depth (nvme0n1) 00:10:57.507 Could not set queue depth (nvme0n2) 00:10:57.507 Could not set queue depth (nvme0n3) 00:10:57.507 Could not set queue depth (nvme0n4) 00:10:57.507 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.507 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.507 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.507 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:57.507 fio-3.35 00:10:57.507 Starting 4 threads 00:11:00.788 16:06:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:00.788 fio: pid=79481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.788 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=39145472, buflen=4096 00:11:00.788 16:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:00.788 fio: pid=79480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:00.788 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45264896, buflen=4096 00:11:01.046 16:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.046 16:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:01.304 fio: pid=79478, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:01.304 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8679424, buflen=4096 00:11:01.304 16:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.304 16:06:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:01.563 fio: pid=79479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:01.563 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=18317312, buflen=4096 00:11:01.563 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.563 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:01.563 00:11:01.563 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79478: Tue Nov 19 16:06:08 2024 00:11:01.563 read: IOPS=5212, BW=20.4MiB/s (21.3MB/s)(72.3MiB/3550msec) 00:11:01.563 slat (usec): min=9, max=11741, avg=16.81, stdev=133.99 00:11:01.563 clat (usec): min=124, max=1813, avg=173.73, 
stdev=32.47 00:11:01.563 lat (usec): min=137, max=12283, avg=190.54, stdev=140.05 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:01.563 | 30.00th=[ 161], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:11:01.563 | 70.00th=[ 178], 80.00th=[ 186], 90.00th=[ 208], 95.00th=[ 225], 00:11:01.563 | 99.00th=[ 245], 99.50th=[ 255], 99.90th=[ 416], 99.95th=[ 562], 00:11:01.563 | 99.99th=[ 1729] 00:11:01.563 bw ( KiB/s): min=20856, max=22144, per=34.50%, avg=21621.33, stdev=483.52, samples=6 00:11:01.563 iops : min= 5214, max= 5536, avg=5405.33, stdev=120.88, samples=6 00:11:01.563 lat (usec) : 250=99.31%, 500=0.61%, 750=0.04%, 1000=0.01% 00:11:01.563 lat (msec) : 2=0.02% 00:11:01.563 cpu : usr=1.80%, sys=6.71%, ctx=18517, majf=0, minf=1 00:11:01.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 issued rwts: total=18504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.563 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79479: Tue Nov 19 16:06:08 2024 00:11:01.563 read: IOPS=5448, BW=21.3MiB/s (22.3MB/s)(81.5MiB/3828msec) 00:11:01.563 slat (usec): min=9, max=8759, avg=16.25, stdev=127.48 00:11:01.563 clat (usec): min=123, max=2136, avg=166.03, stdev=30.51 00:11:01.563 lat (usec): min=135, max=8985, avg=182.28, stdev=132.40 00:11:01.563 clat percentiles (usec): 00:11:01.563 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:11:01.563 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:11:01.563 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 200], 95.00th=[ 219], 00:11:01.563 | 99.00th=[ 239], 99.50th=[ 249], 99.90th=[ 310], 99.95th=[ 494], 00:11:01.563 | 99.99th=[ 1106] 00:11:01.563 bw ( KiB/s): min=17003, max=23040, per=34.86%, avg=21842.71, stdev=2185.39, samples=7 00:11:01.563 iops : min= 4250, max= 5760, avg=5460.57, stdev=546.62, samples=7 00:11:01.563 lat (usec) : 250=99.54%, 500=0.40%, 750=0.03%, 1000=0.01% 00:11:01.563 lat (msec) : 2=0.01%, 4=0.01% 00:11:01.563 cpu : usr=1.38%, sys=7.16%, ctx=20885, majf=0, minf=2 00:11:01.563 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.563 issued rwts: total=20857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.563 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.564 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79480: Tue Nov 19 16:06:08 2024 00:11:01.564 read: IOPS=3373, BW=13.2MiB/s (13.8MB/s)(43.2MiB/3276msec) 00:11:01.564 slat (usec): min=13, max=9566, avg=18.20, stdev=123.47 00:11:01.564 clat (usec): min=147, max=1886, avg=276.57, stdev=54.63 00:11:01.564 lat (usec): min=160, max=9787, avg=294.77, stdev=134.37 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 159], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 269], 00:11:01.564 | 30.00th=[ 277], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:11:01.564 | 70.00th=[ 293], 80.00th=[ 302], 90.00th=[ 314], 95.00th=[ 322], 00:11:01.564 | 99.00th=[ 396], 99.50th=[ 449], 99.90th=[ 
676], 99.95th=[ 807], 00:11:01.564 | 99.99th=[ 1696] 00:11:01.564 bw ( KiB/s): min=12768, max=13432, per=20.76%, avg=13006.67, stdev=241.62, samples=6 00:11:01.564 iops : min= 3192, max= 3358, avg=3251.67, stdev=60.40, samples=6 00:11:01.564 lat (usec) : 250=14.09%, 500=85.54%, 750=0.30%, 1000=0.04% 00:11:01.564 lat (msec) : 2=0.03% 00:11:01.564 cpu : usr=1.25%, sys=4.49%, ctx=11054, majf=0, minf=2 00:11:01.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 issued rwts: total=11052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.564 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=79481: Tue Nov 19 16:06:08 2024 00:11:01.564 read: IOPS=3224, BW=12.6MiB/s (13.2MB/s)(37.3MiB/2964msec) 00:11:01.564 slat (usec): min=13, max=138, avg=16.38, stdev= 2.85 00:11:01.564 clat (usec): min=171, max=2595, avg=292.00, stdev=46.13 00:11:01.564 lat (usec): min=190, max=2622, avg=308.38, stdev=46.28 00:11:01.564 clat percentiles (usec): 00:11:01.564 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:11:01.564 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:11:01.564 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:11:01.564 | 99.00th=[ 347], 99.50th=[ 449], 99.90th=[ 701], 99.95th=[ 873], 00:11:01.564 | 99.99th=[ 2606] 00:11:01.564 bw ( KiB/s): min=12768, max=13080, per=20.61%, avg=12912.00, stdev=141.42, samples=5 00:11:01.564 iops : min= 3192, max= 3270, avg=3228.00, stdev=35.36, samples=5 00:11:01.564 lat (usec) : 250=0.63%, 500=99.01%, 750=0.29%, 1000=0.03% 00:11:01.564 lat (msec) : 2=0.01%, 4=0.02% 00:11:01.564 cpu : usr=1.01%, sys=4.76%, ctx=9560, majf=0, minf=1 00:11:01.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:01.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:01.564 issued rwts: total=9558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:01.564 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:01.564 00:11:01.564 Run status group 0 (all jobs): 00:11:01.564 READ: bw=61.2MiB/s (64.2MB/s), 12.6MiB/s-21.3MiB/s (13.2MB/s-22.3MB/s), io=234MiB (246MB), run=2964-3828msec 00:11:01.564 00:11:01.564 Disk stats (read/write): 00:11:01.564 nvme0n1: ios=17726/0, merge=0/0, ticks=3039/0, in_queue=3039, util=95.46% 00:11:01.564 nvme0n2: ios=19614/0, merge=0/0, ticks=3317/0, in_queue=3317, util=95.70% 00:11:01.564 nvme0n3: ios=10263/0, merge=0/0, ticks=2955/0, in_queue=2955, util=96.31% 00:11:01.564 nvme0n4: ios=9269/0, merge=0/0, ticks=2738/0, in_queue=2738, util=96.70% 00:11:01.823 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:01.823 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:02.082 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.082 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:02.340 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.340 16:06:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:02.598 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:02.598 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 79438 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.856 nvmf hotplug test: fio failed as expected 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:02.856 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@124 -- # set +e 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:03.114 rmmod nvme_tcp 00:11:03.114 rmmod nvme_fabrics 00:11:03.114 rmmod nvme_keyring 00:11:03.114 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 79050 ']' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 79050 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 79050 ']' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 79050 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79050 00:11:03.372 killing process with pid 79050 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79050' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 79050 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 79050 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:03.372 16:06:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:03.372 16:06:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:03.372 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:03.631 ************************************ 00:11:03.631 END TEST nvmf_fio_target 00:11:03.631 ************************************ 00:11:03.631 00:11:03.631 real 0m20.048s 00:11:03.631 user 1m15.228s 00:11:03.631 sys 0m10.488s 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:03.631 ************************************ 00:11:03.631 START TEST nvmf_bdevio 00:11:03.631 ************************************ 00:11:03.631 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:03.898 * Looking for test storage... 
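
The "nvmf hotplug test: fio failed as expected" message earlier in this trace is the point of the preceding fio-target section: malloc and raid bdevs are deleted while the backgrounded fio job still has reads in flight, fio reports the I/O as "Operation not supported", and the harness treats the resulting non-zero exit status (4 here) as success. A minimal sketch of that pattern follows; the fio flags are a simplification of the fio-wrapper invocation in the trace, and the job name is hypothetical, but the device path and the bdev_malloc_delete RPC are the ones shown above.

  # start a long-running read job in the background and remember its pid
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --runtime=10 --time_based=1 \
      --ioengine=libaio --bs=4096 --iodepth=1 &
  fio_pid=$!

  # pull the backing bdev out from under it, as the trace does with bdev_malloc_delete
  sleep 3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0

  # fio is expected to fail; a zero exit here would mean the hotplug went unnoticed
  fio_status=0
  wait "$fio_pid" || fio_status=$?
  if [ "$fio_status" -eq 0 ]; then
      echo "ERROR: fio succeeded even though its bdev was removed" >&2
      exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'
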
00:11:03.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.898 --rc genhtml_branch_coverage=1 00:11:03.898 --rc genhtml_function_coverage=1 00:11:03.898 --rc genhtml_legend=1 00:11:03.898 --rc geninfo_all_blocks=1 00:11:03.898 --rc geninfo_unexecuted_blocks=1 00:11:03.898 00:11:03.898 ' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.898 --rc genhtml_branch_coverage=1 00:11:03.898 --rc genhtml_function_coverage=1 00:11:03.898 --rc genhtml_legend=1 00:11:03.898 --rc geninfo_all_blocks=1 00:11:03.898 --rc geninfo_unexecuted_blocks=1 00:11:03.898 00:11:03.898 ' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.898 --rc genhtml_branch_coverage=1 00:11:03.898 --rc genhtml_function_coverage=1 00:11:03.898 --rc genhtml_legend=1 00:11:03.898 --rc geninfo_all_blocks=1 00:11:03.898 --rc geninfo_unexecuted_blocks=1 00:11:03.898 00:11:03.898 ' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.898 --rc genhtml_branch_coverage=1 00:11:03.898 --rc genhtml_function_coverage=1 00:11:03.898 --rc genhtml_legend=1 00:11:03.898 --rc geninfo_all_blocks=1 00:11:03.898 --rc geninfo_unexecuted_blocks=1 00:11:03.898 00:11:03.898 ' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:03.898 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:03.899 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
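
The nvmftestinit call traced here builds the virtual-ethernet topology that the rest of the log exercises: a target network namespace, veth pairs for the initiator and target sides, the 10.0.0.1-10.0.0.4 addresses, and a bridge joining the peer ends. A condensed sketch of that sequence, following the ip/iptables commands that appear in the trace below (interface names, addresses, and the port come from this harness; the grouping and ordering here are simplified, not the script's exact order):

  # fresh namespace for the nvmf target
  ip netns add nvmf_tgt_ns_spdk

  # veth pairs: initiator/target interfaces and their bridge-facing peers
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # target-side interfaces live inside the namespace
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # addressing: 10.0.0.1/.2 for the initiator, 10.0.0.3/.4 for the target
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  # bring everything up and enslave the bridge-facing ends to one bridge
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
             nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done

  # allow NVMe/TCP traffic to the default port
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
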
00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:03.899 Cannot find device "nvmf_init_br" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:03.899 Cannot find device "nvmf_init_br2" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:03.899 Cannot find device "nvmf_tgt_br" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:03.899 Cannot find device "nvmf_tgt_br2" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:03.899 Cannot find device "nvmf_init_br" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:03.899 Cannot find device "nvmf_init_br2" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:03.899 Cannot find device "nvmf_tgt_br" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:03.899 Cannot find device "nvmf_tgt_br2" 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:03.899 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:04.209 Cannot find device "nvmf_br" 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:04.209 Cannot find device "nvmf_init_if" 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:04.209 Cannot find device "nvmf_init_if2" 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:04.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:04.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:04.209 
16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:04.209 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:04.209 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:04.209 00:11:04.209 --- 10.0.0.3 ping statistics --- 00:11:04.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.209 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:04.209 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:04.209 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:11:04.209 00:11:04.209 --- 10.0.0.4 ping statistics --- 00:11:04.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.209 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:04.209 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:04.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:04.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:11:04.476 00:11:04.476 --- 10.0.0.1 ping statistics --- 00:11:04.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.476 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:04.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
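
The ipts calls above are a thin wrapper around iptables: every rule the harness adds is tagged with an "-m comment --comment 'SPDK_NVMF:...'" marker, so teardown can later remove exactly those rules by filtering them out of the saved ruleset. A minimal sketch of the same pattern, with the rule arguments used in this run:

    # open TCP/4420 on the initiator-side veth, tagged for later cleanup
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    # later (the iptr step in the teardown further down), drop every tagged rule at once
    iptables-save | grep -v SPDK_NVMF | iptables-restore
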
00:11:04.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:11:04.476 00:11:04.476 --- 10.0.0.2 ping statistics --- 00:11:04.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.476 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=79805 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 79805 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 79805 ']' 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.476 16:06:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 [2024-11-19 16:06:11.010958] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
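
Taken together, the ip commands above build the virtual test network that the four pings just verified: nvmf_init_if and nvmf_init_if2 stay in the root namespace as initiator ports (10.0.0.1 and 10.0.0.2), nvmf_tgt_if and nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace as target ports (10.0.0.3 and 10.0.0.4), and all four veth peers are enslaved to the nvmf_br bridge. NVMF_APP is then prefixed with "ip netns exec nvmf_tgt_ns_spdk", so the nvmf_tgt started below runs inside the target namespace. A stripped-down sketch of one initiator/target pair, using the same names and addresses:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.3    # root namespace -> target namespace, across the bridge
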
00:11:04.476 [2024-11-19 16:06:11.011085] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.476 [2024-11-19 16:06:11.162523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.476 [2024-11-19 16:06:11.186963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.476 [2024-11-19 16:06:11.187036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:04.476 [2024-11-19 16:06:11.187062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.476 [2024-11-19 16:06:11.187072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.476 [2024-11-19 16:06:11.187082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.735 [2024-11-19 16:06:11.188079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:04.735 [2024-11-19 16:06:11.188288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:04.735 [2024-11-19 16:06:11.188379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:04.735 [2024-11-19 16:06:11.188380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.735 [2024-11-19 16:06:11.221023] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 [2024-11-19 16:06:11.311884] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 Malloc0 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.735 [2024-11-19 16:06:11.368704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:04.735 { 00:11:04.735 "params": { 00:11:04.735 "name": "Nvme$subsystem", 00:11:04.735 "trtype": "$TEST_TRANSPORT", 00:11:04.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.735 "adrfam": "ipv4", 00:11:04.735 "trsvcid": "$NVMF_PORT", 00:11:04.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.735 "hdgst": ${hdgst:-false}, 00:11:04.735 "ddgst": ${ddgst:-false} 00:11:04.735 }, 00:11:04.735 "method": "bdev_nvme_attach_controller" 00:11:04.735 } 00:11:04.735 EOF 00:11:04.735 )") 00:11:04.735 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:04.736 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
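
The bdevio.sh steps above provision the target entirely over its RPC socket: a TCP transport, a 64 MiB Malloc0 bdev (131072 blocks of 512 bytes), subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and a TCP listener on 10.0.0.3:4420. Outside the harness the same sequence could be driven with scripts/rpc.py against a running nvmf_tgt; a sketch, assuming the default /var/tmp/spdk.sock RPC socket and omitting the harness's extra transport flags:

    scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The bdevio binary is then pointed at that listener with --json /dev/fd/62, where gen_nvmf_target_json expands the heredoc above into a single bdev_nvme_attach_controller entry; the generated JSON is printed just below.
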
00:11:04.736 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:04.736 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:04.736 "params": { 00:11:04.736 "name": "Nvme1", 00:11:04.736 "trtype": "tcp", 00:11:04.736 "traddr": "10.0.0.3", 00:11:04.736 "adrfam": "ipv4", 00:11:04.736 "trsvcid": "4420", 00:11:04.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.736 "hdgst": false, 00:11:04.736 "ddgst": false 00:11:04.736 }, 00:11:04.736 "method": "bdev_nvme_attach_controller" 00:11:04.736 }' 00:11:04.736 [2024-11-19 16:06:11.430487] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:11:04.736 [2024-11-19 16:06:11.430587] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79828 ] 00:11:04.994 [2024-11-19 16:06:11.587258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.994 [2024-11-19 16:06:11.613848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.994 [2024-11-19 16:06:11.613998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.994 [2024-11-19 16:06:11.614003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.994 [2024-11-19 16:06:11.655007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:05.252 I/O targets: 00:11:05.252 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:05.252 00:11:05.252 00:11:05.252 CUnit - A unit testing framework for C - Version 2.1-3 00:11:05.252 http://cunit.sourceforge.net/ 00:11:05.252 00:11:05.252 00:11:05.252 Suite: bdevio tests on: Nvme1n1 00:11:05.252 Test: blockdev write read block ...passed 00:11:05.252 Test: blockdev write zeroes read block ...passed 00:11:05.252 Test: blockdev write zeroes read no split ...passed 00:11:05.252 Test: blockdev write zeroes read split ...passed 00:11:05.252 Test: blockdev write zeroes read split partial ...passed 00:11:05.252 Test: blockdev reset ...[2024-11-19 16:06:11.788855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:05.252 [2024-11-19 16:06:11.789146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22c88d0 (9): Bad file descriptor 00:11:05.252 [2024-11-19 16:06:11.804961] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:11:05.252 passed 00:11:05.252 Test: blockdev write read 8 blocks ...passed 00:11:05.252 Test: blockdev write read size > 128k ...passed 00:11:05.252 Test: blockdev write read invalid size ...passed 00:11:05.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:05.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:05.252 Test: blockdev write read max offset ...passed 00:11:05.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:05.252 Test: blockdev writev readv 8 blocks ...passed 00:11:05.252 Test: blockdev writev readv 30 x 1block ...passed 00:11:05.252 Test: blockdev writev readv block ...passed 00:11:05.252 Test: blockdev writev readv size > 128k ...passed 00:11:05.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:05.252 Test: blockdev comparev and writev ...[2024-11-19 16:06:11.815659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.815737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.815764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.815777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.816200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.816252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.816278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.816290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.816650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.816686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.816709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.816721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.817051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.817086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:05.252 [2024-11-19 16:06:11.817109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.252 [2024-11-19 16:06:11.817121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:05.252 passed 00:11:05.252 Test: blockdev nvme passthru rw ...passed 00:11:05.252 Test: blockdev nvme passthru vendor specific ...[2024-11-19 16:06:11.818505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.252 [2024-11-19 16:06:11.818537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:05.253 [2024-11-19 16:06:11.818904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.253 [2024-11-19 16:06:11.818940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:05.253 [2024-11-19 16:06:11.819302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.253 [2024-11-19 16:06:11.819338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:05.253 [2024-11-19 16:06:11.819580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.253 [2024-11-19 16:06:11.819716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:05.253 passed 00:11:05.253 Test: blockdev nvme admin passthru ...passed 00:11:05.253 Test: blockdev copy ...passed 00:11:05.253 00:11:05.253 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.253 suites 1 1 n/a 0 0 00:11:05.253 tests 23 23 23 0 0 00:11:05.253 asserts 152 152 152 0 n/a 00:11:05.253 00:11:05.253 Elapsed time = 0.161 seconds 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:05.253 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:05.511 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:05.511 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:05.511 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:05.511 16:06:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:05.511 rmmod nvme_tcp 00:11:05.511 rmmod nvme_fabrics 00:11:05.511 rmmod nvme_keyring 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
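
The COMPARE FAILURE and ABORTED - FAILED FUSED completions above are not failures of the run: they are printed while the "comparev and writev" case exercises fused compare-and-write pairs whose compare half intentionally miscompares, aborting the fused write, and that case is marked passed. The summary confirms a clean suite (23/23 tests, 152/152 asserts, about 0.16 s), after which bdevio.sh deletes the subsystem and nvmftestfini unloads the initiator-side modules. A rough sketch of the equivalent manual teardown, using the pid and names from this run:

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -r nvme-tcp nvme-fabrics    # mirrors the rmmod output above
    kill 79805                           # nvmfpid recorded when nvmf_tgt was started
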
00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 79805 ']' 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 79805 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 79805 ']' 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 79805 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79805 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:05.511 killing process with pid 79805 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79805' 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 79805 00:11:05.511 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 79805 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:05.770 00:11:05.770 real 0m2.185s 00:11:05.770 user 0m5.286s 00:11:05.770 sys 0m0.758s 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.770 ************************************ 00:11:05.770 16:06:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.770 END TEST nvmf_bdevio 00:11:05.770 ************************************ 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:06.028 00:11:06.028 real 2m28.012s 00:11:06.028 user 6m24.862s 00:11:06.028 sys 0m54.015s 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.028 ************************************ 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:06.028 END TEST nvmf_target_core 00:11:06.028 ************************************ 00:11:06.028 16:06:12 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:06.028 16:06:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.028 16:06:12 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.028 16:06:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.028 ************************************ 00:11:06.028 START TEST nvmf_target_extra 00:11:06.028 ************************************ 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:06.028 * Looking for test storage... 
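
With nvmf_bdevio done (2.185 s of wall-clock time) the whole nvmf_target_core suite closes out at 2m28s, and the job moves on to the next top-level suite. run_test is the wrapper that emits the START TEST / END TEST banners and the timing lines seen here; the next phase is launched the same way and immediately kicks off its first sub-suite, with paths exactly as in this job:

    run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
    # which in turn begins with
    run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp
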
00:11:06.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.028 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.287 --rc genhtml_branch_coverage=1 00:11:06.287 --rc genhtml_function_coverage=1 00:11:06.287 --rc genhtml_legend=1 00:11:06.287 --rc geninfo_all_blocks=1 00:11:06.287 --rc geninfo_unexecuted_blocks=1 00:11:06.287 00:11:06.287 ' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.287 --rc genhtml_branch_coverage=1 00:11:06.287 --rc genhtml_function_coverage=1 00:11:06.287 --rc genhtml_legend=1 00:11:06.287 --rc geninfo_all_blocks=1 00:11:06.287 --rc geninfo_unexecuted_blocks=1 00:11:06.287 00:11:06.287 ' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.287 --rc genhtml_branch_coverage=1 00:11:06.287 --rc genhtml_function_coverage=1 00:11:06.287 --rc genhtml_legend=1 00:11:06.287 --rc geninfo_all_blocks=1 00:11:06.287 --rc geninfo_unexecuted_blocks=1 00:11:06.287 00:11:06.287 ' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.287 --rc genhtml_branch_coverage=1 00:11:06.287 --rc genhtml_function_coverage=1 00:11:06.287 --rc genhtml_legend=1 00:11:06.287 --rc geninfo_all_blocks=1 00:11:06.287 --rc geninfo_unexecuted_blocks=1 00:11:06.287 00:11:06.287 ' 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.287 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.288 16:06:12 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.288 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.288 ************************************ 00:11:06.288 START TEST nvmf_auth_target 00:11:06.288 ************************************ 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:06.288 * Looking for test storage... 
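
Each test script re-sources test/nvmf/common.sh, which is why the same preamble (the lcov version probe, the PATH exports, the host NQN generation) repeats for nvmf_target_extra above and for nvmf_auth_target below. common.sh asks nvme-cli for a fresh host NQN and keeps its UUID part as the host ID, and both are carried in NVME_HOST for later nvme connect invocations. A small sketch of that derivation; the parameter expansion is an illustrative equivalent, not necessarily the exact expression common.sh uses:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
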
00:11:06.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:06.288 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:06.547 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:06.547 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:06.547 16:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.547 --rc genhtml_branch_coverage=1 00:11:06.547 --rc genhtml_function_coverage=1 00:11:06.547 --rc genhtml_legend=1 00:11:06.547 --rc geninfo_all_blocks=1 00:11:06.547 --rc geninfo_unexecuted_blocks=1 00:11:06.547 00:11:06.547 ' 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.547 --rc genhtml_branch_coverage=1 00:11:06.547 --rc genhtml_function_coverage=1 00:11:06.547 --rc genhtml_legend=1 00:11:06.547 --rc geninfo_all_blocks=1 00:11:06.547 --rc geninfo_unexecuted_blocks=1 00:11:06.547 00:11:06.547 ' 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.547 --rc genhtml_branch_coverage=1 00:11:06.547 --rc genhtml_function_coverage=1 00:11:06.547 --rc genhtml_legend=1 00:11:06.547 --rc geninfo_all_blocks=1 00:11:06.547 --rc geninfo_unexecuted_blocks=1 00:11:06.547 00:11:06.547 ' 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:06.547 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:06.547 --rc genhtml_branch_coverage=1 00:11:06.547 --rc genhtml_function_coverage=1 00:11:06.547 --rc genhtml_legend=1 00:11:06.547 --rc geninfo_all_blocks=1 00:11:06.547 --rc geninfo_unexecuted_blocks=1 00:11:06.547 00:11:06.547 ' 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.547 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:06.548 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:06.548 
16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:06.548 Cannot find device "nvmf_init_br" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:06.548 Cannot find device "nvmf_init_br2" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:06.548 Cannot find device "nvmf_tgt_br" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:06.548 Cannot find device "nvmf_tgt_br2" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:06.548 Cannot find device "nvmf_init_br" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:06.548 Cannot find device "nvmf_init_br2" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:06.548 Cannot find device "nvmf_tgt_br" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:06.548 Cannot find device "nvmf_tgt_br2" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:06.548 Cannot find device "nvmf_br" 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:06.548 Cannot find device "nvmf_init_if" 00:11:06.548 16:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:06.548 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:06.548 Cannot find device "nvmf_init_if2" 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:06.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:06.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:06.549 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:06.808 16:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:06.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:06.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.119 ms 00:11:06.808 00:11:06.808 --- 10.0.0.3 ping statistics --- 00:11:06.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.808 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:06.808 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:06.808 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:11:06.808 00:11:06.808 --- 10.0.0.4 ping statistics --- 00:11:06.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.808 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:06.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:06.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:11:06.808 00:11:06.808 --- 10.0.0.1 ping statistics --- 00:11:06.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.808 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:06.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:06.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:11:06.808 00:11:06.808 --- 10.0.0.2 ping statistics --- 00:11:06.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:06.808 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80115 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80115 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80115 ']' 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
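Condensed for reference, the nvmf_veth_init sequence traced above amounts to roughly the script below (interface names and 10.0.0.x addresses exactly as they appear in the trace; the earlier "Cannot find device" / "Cannot open network namespace" messages are just the preceding cleanup pass finding nothing to tear down, which the trace tolerates via the `# true` branches). This is a condensed sketch, not the verbatim helper — the real ipts wrapper, for instance, also tags each iptables rule with an SPDK_NVMF comment so it can be removed on teardown.

```bash
#!/usr/bin/env bash
# Sketch of the veth/bridge topology nvmf_veth_init builds for the TCP tests.
set -e

ip netns add nvmf_tgt_ns_spdk

# Initiator-side pairs stay in the default namespace ...
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
# ... target-side pairs are moved into the namespace that will run nvmf_tgt.
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# One bridge ties the four peer ends together so 10.0.0.1/.2 can reach 10.0.0.3/.4.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port on the initiator interfaces and allow bridged forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check both directions, exactly as the trace does.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
```

Two initiator-side and two target-side interfaces are created so the test can exercise both the FIRST/SECOND initiator IPs (10.0.0.1/.2) and the FIRST/SECOND target listeners (10.0.0.3/.4) over the same bridge, with nvmf_tgt itself launched inside nvmf_tgt_ns_spdk as the next trace lines show.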
00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:06.808 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=80140 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87d725c8f6bf7d6cc405dad2acefe31078ea2670d3b638ef 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kPz 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87d725c8f6bf7d6cc405dad2acefe31078ea2670d3b638ef 0 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87d725c8f6bf7d6cc405dad2acefe31078ea2670d3b638ef 0 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87d725c8f6bf7d6cc405dad2acefe31078ea2670d3b638ef 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.446 16:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kPz 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kPz 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kPz 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:07.446 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a72d2642415e68dd73249181e83c348c057c471f7e972a99872407aaadded46 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.oPh 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a72d2642415e68dd73249181e83c348c057c471f7e972a99872407aaadded46 3 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a72d2642415e68dd73249181e83c348c057c471f7e972a99872407aaadded46 3 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a72d2642415e68dd73249181e83c348c057c471f7e972a99872407aaadded46 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.oPh 00:11:07.447 16:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.oPh 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.oPh 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:07.447 16:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a105504f5956d600f6aa5e672ef8b3d7 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.DRp 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a105504f5956d600f6aa5e672ef8b3d7 1 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a105504f5956d600f6aa5e672ef8b3d7 1 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a105504f5956d600f6aa5e672ef8b3d7 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.DRp 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.DRp 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.DRp 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=16207573d8fd192935d206013a9ebc388c244adea47ec2eb 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1So 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 16207573d8fd192935d206013a9ebc388c244adea47ec2eb 2 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 16207573d8fd192935d206013a9ebc388c244adea47ec2eb 2 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=16207573d8fd192935d206013a9ebc388c244adea47ec2eb 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1So 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1So 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1So 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=50ef9e6a6340a7cd4da544d37fe3d6c70b6a9baba411995a 00:11:07.447 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.E6E 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 50ef9e6a6340a7cd4da544d37fe3d6c70b6a9baba411995a 2 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 50ef9e6a6340a7cd4da544d37fe3d6c70b6a9baba411995a 2 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=50ef9e6a6340a7cd4da544d37fe3d6c70b6a9baba411995a 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.E6E 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.E6E 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.E6E 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.706 16:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a58d0e49ca4f232ca5629562a9908417 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dW8 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a58d0e49ca4f232ca5629562a9908417 1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a58d0e49ca4f232ca5629562a9908417 1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a58d0e49ca4f232ca5629562a9908417 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dW8 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dW8 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.dW8 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f52c68de89f7ab1001f9513fafa6b03d89fba6fdbbb6e77575f264391757d557 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.ptz 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
f52c68de89f7ab1001f9513fafa6b03d89fba6fdbbb6e77575f264391757d557 3 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f52c68de89f7ab1001f9513fafa6b03d89fba6fdbbb6e77575f264391757d557 3 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f52c68de89f7ab1001f9513fafa6b03d89fba6fdbbb6e77575f264391757d557 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.ptz 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.ptz 00:11:07.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.ptz 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 80115 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80115 ']' 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.706 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.707 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.707 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.707 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 80140 /var/tmp/host.sock 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80140 ']' 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
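The seven gen_dhchap_key calls above (keys 0-3 plus ckeys 0-2; ckeys[3] is deliberately left empty) all follow the same pattern: draw len/2 random bytes with xxd as a hex string, wrap that string into a DHHC-1 secret with a short inline python step, and store the result in a chmod-0600 temp file. Below is a minimal stand-alone sketch of that helper; the 4-byte suffix visible before the base64 padding is assumed here to be the usual little-endian CRC32 of the key material (the trace itself only shows the resulting base64, e.g. the DHHC-1:00:ODdkNzI1... value reused by nvme connect later on).

```bash
#!/usr/bin/env bash
# Sketch of gen_dhchap_key as it appears in the trace: random hex key material
# wrapped into an NVMe DH-HMAC-CHAP secret string and written to a private file.
gen_dhchap_key() {
    local digest=$1 len=$2 key file
    # Digest name -> DHHC-1 hash identifier (null=0, sha256=1, sha384=2, sha512=3).
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)

    # len hex characters of key material, e.g. len=48 -> 24 random bytes.
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")

    # DHHC-1:<hash id>:<base64(ASCII hex key + little-endian CRC32)>:
    # (the CRC32 suffix is an assumption; the trace only shows the base64 output)
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PYEOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PYEOF

    chmod 0600 "$file"
    echo "$file"
}

key0=$(gen_dhchap_key null 48)     # same shape as keys[0] above: 48 hex chars, hash id 00
ckey0=$(gen_dhchap_key sha512 64)  # same shape as ckeys[0] above: 64 hex chars, hash id 03
```

The second field encodes which digest the key was generated for (00 = null, 01 = sha256, 02 = sha384, 03 = sha512), which is why the keys[0]/ckeys[0] pair generated here reappears later on the nvme connect command line as --dhchap-secret DHHC-1:00:... and --dhchap-ctrl-secret DHHC-1:03:... . The trace lines that follow register these files with both the target (rpc_cmd keyring_file_add_key keyN/ckeyN) and the host app on /var/tmp/host.sock before any authenticated attach is attempted.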
00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.965 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kPz 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kPz 00:11:08.530 16:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kPz 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.oPh ]] 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oPh 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oPh 00:11:08.788 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oPh 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DRp 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.DRp 00:11:09.046 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.DRp 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.1So ]] 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1So 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1So 00:11:09.305 16:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1So 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E6E 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.E6E 00:11:09.564 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.E6E 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.dW8 ]] 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dW8 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dW8 00:11:09.822 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dW8 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ptz 00:11:10.080 16:06:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.ptz 00:11:10.080 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.ptz 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.338 16:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.596 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:10.854 00:11:10.854 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.854 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:10.854 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.112 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:11.112 { 00:11:11.112 "cntlid": 1, 00:11:11.112 "qid": 0, 00:11:11.112 "state": "enabled", 00:11:11.112 "thread": "nvmf_tgt_poll_group_000", 00:11:11.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:11.112 "listen_address": { 00:11:11.112 "trtype": "TCP", 00:11:11.112 "adrfam": "IPv4", 00:11:11.112 "traddr": "10.0.0.3", 00:11:11.112 "trsvcid": "4420" 00:11:11.112 }, 00:11:11.112 "peer_address": { 00:11:11.112 "trtype": "TCP", 00:11:11.112 "adrfam": "IPv4", 00:11:11.112 "traddr": "10.0.0.1", 00:11:11.112 "trsvcid": "38276" 00:11:11.112 }, 00:11:11.112 "auth": { 00:11:11.112 "state": "completed", 00:11:11.112 "digest": "sha256", 00:11:11.112 "dhgroup": "null" 00:11:11.112 } 00:11:11.112 } 00:11:11.112 ]' 00:11:11.113 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:11.113 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:11.113 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:11.113 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:11.113 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:11.371 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:11.371 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:11.371 16:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:11.629 16:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:11.629 16:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.824 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.824 16:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.083 00:11:16.342 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.342 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.342 16:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.342 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.342 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.342 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.342 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.603 { 00:11:16.603 "cntlid": 3, 00:11:16.603 "qid": 0, 00:11:16.603 "state": "enabled", 00:11:16.603 "thread": "nvmf_tgt_poll_group_000", 00:11:16.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:16.603 "listen_address": { 00:11:16.603 "trtype": "TCP", 00:11:16.603 "adrfam": "IPv4", 00:11:16.603 "traddr": "10.0.0.3", 00:11:16.603 "trsvcid": "4420" 00:11:16.603 }, 00:11:16.603 "peer_address": { 00:11:16.603 "trtype": "TCP", 00:11:16.603 "adrfam": "IPv4", 00:11:16.603 "traddr": "10.0.0.1", 00:11:16.603 "trsvcid": "34920" 00:11:16.603 }, 00:11:16.603 "auth": { 00:11:16.603 "state": "completed", 00:11:16.603 "digest": "sha256", 00:11:16.603 "dhgroup": "null" 00:11:16.603 } 00:11:16.603 } 00:11:16.603 ]' 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.603 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.861 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret 
DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:16.861 16:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:17.428 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.687 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:17.946 00:11:17.946 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:17.946 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:17.946 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.542 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.542 { 00:11:18.542 "cntlid": 5, 00:11:18.542 "qid": 0, 00:11:18.542 "state": "enabled", 00:11:18.542 "thread": "nvmf_tgt_poll_group_000", 00:11:18.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:18.543 "listen_address": { 00:11:18.543 "trtype": "TCP", 00:11:18.543 "adrfam": "IPv4", 00:11:18.543 "traddr": "10.0.0.3", 00:11:18.543 "trsvcid": "4420" 00:11:18.543 }, 00:11:18.543 "peer_address": { 00:11:18.543 "trtype": "TCP", 00:11:18.543 "adrfam": "IPv4", 00:11:18.543 "traddr": "10.0.0.1", 00:11:18.543 "trsvcid": "34954" 00:11:18.543 }, 00:11:18.543 "auth": { 00:11:18.543 "state": "completed", 00:11:18.543 "digest": "sha256", 00:11:18.543 "dhgroup": "null" 00:11:18.543 } 00:11:18.543 } 00:11:18.543 ]' 00:11:18.543 16:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.543 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:18.810 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:18.810 16:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:19.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:19.378 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:19.945 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.203 00:11:20.203 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.203 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:20.203 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.463 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:20.463 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:20.463 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.463 16:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:20.463 { 00:11:20.463 "cntlid": 7, 00:11:20.463 "qid": 0, 00:11:20.463 "state": "enabled", 00:11:20.463 "thread": "nvmf_tgt_poll_group_000", 00:11:20.463 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:20.463 "listen_address": { 00:11:20.463 "trtype": "TCP", 00:11:20.463 "adrfam": "IPv4", 00:11:20.463 "traddr": "10.0.0.3", 00:11:20.463 "trsvcid": "4420" 00:11:20.463 }, 00:11:20.463 "peer_address": { 00:11:20.463 "trtype": "TCP", 00:11:20.463 "adrfam": "IPv4", 00:11:20.463 "traddr": "10.0.0.1", 00:11:20.463 "trsvcid": "34974" 00:11:20.463 }, 00:11:20.463 "auth": { 00:11:20.463 "state": "completed", 00:11:20.463 "digest": "sha256", 00:11:20.463 "dhgroup": "null" 00:11:20.463 } 00:11:20.463 } 00:11:20.463 ]' 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:20.463 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.030 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:21.030 16:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.598 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:21.856 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.421 00:11:22.421 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:22.421 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:22.421 16:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.421 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:22.421 { 00:11:22.421 "cntlid": 9, 00:11:22.421 "qid": 0, 00:11:22.421 "state": "enabled", 00:11:22.421 "thread": "nvmf_tgt_poll_group_000", 00:11:22.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:22.421 "listen_address": { 00:11:22.421 "trtype": "TCP", 00:11:22.421 "adrfam": "IPv4", 00:11:22.421 "traddr": "10.0.0.3", 00:11:22.421 "trsvcid": "4420" 00:11:22.421 }, 00:11:22.421 "peer_address": { 00:11:22.421 "trtype": "TCP", 00:11:22.421 "adrfam": "IPv4", 00:11:22.421 "traddr": "10.0.0.1", 00:11:22.421 "trsvcid": "35014" 00:11:22.421 }, 00:11:22.421 "auth": { 00:11:22.421 "state": "completed", 00:11:22.421 "digest": "sha256", 00:11:22.421 "dhgroup": "ffdhe2048" 00:11:22.421 } 00:11:22.421 } 00:11:22.421 ]' 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:22.688 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:22.946 
16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:22.946 16:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:23.880 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:23.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.881 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.139 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.139 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.139 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.139 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.398 00:11:24.398 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:24.398 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:24.398 16:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:24.657 { 00:11:24.657 "cntlid": 11, 00:11:24.657 "qid": 0, 00:11:24.657 "state": "enabled", 00:11:24.657 "thread": "nvmf_tgt_poll_group_000", 00:11:24.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:24.657 "listen_address": { 00:11:24.657 "trtype": "TCP", 00:11:24.657 "adrfam": "IPv4", 00:11:24.657 "traddr": "10.0.0.3", 00:11:24.657 "trsvcid": "4420" 00:11:24.657 }, 00:11:24.657 "peer_address": { 00:11:24.657 "trtype": "TCP", 00:11:24.657 "adrfam": "IPv4", 00:11:24.657 "traddr": "10.0.0.1", 00:11:24.657 "trsvcid": "45478" 00:11:24.657 }, 00:11:24.657 "auth": { 00:11:24.657 "state": "completed", 00:11:24.657 "digest": "sha256", 00:11:24.657 "dhgroup": "ffdhe2048" 00:11:24.657 } 00:11:24.657 } 00:11:24.657 ]' 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:24.657 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:24.657 
16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:24.916 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:24.916 16:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:25.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:25.851 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:26.418 00:11:26.418 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:26.418 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:26.418 16:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:26.678 { 00:11:26.678 "cntlid": 13, 00:11:26.678 "qid": 0, 00:11:26.678 "state": "enabled", 00:11:26.678 "thread": "nvmf_tgt_poll_group_000", 00:11:26.678 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:26.678 "listen_address": { 00:11:26.678 "trtype": "TCP", 00:11:26.678 "adrfam": "IPv4", 00:11:26.678 "traddr": "10.0.0.3", 00:11:26.678 "trsvcid": "4420" 00:11:26.678 }, 00:11:26.678 "peer_address": { 00:11:26.678 "trtype": "TCP", 00:11:26.678 "adrfam": "IPv4", 00:11:26.678 "traddr": "10.0.0.1", 00:11:26.678 "trsvcid": "45500" 00:11:26.678 }, 00:11:26.678 "auth": { 00:11:26.678 "state": "completed", 00:11:26.678 "digest": "sha256", 00:11:26.678 "dhgroup": "ffdhe2048" 00:11:26.678 } 00:11:26.678 } 00:11:26.678 ]' 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:26.678 16:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:26.678 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.937 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:26.937 16:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:27.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.505 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
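
The traces above and below repeat one authentication round per key for each digest/dhgroup pair. Assembled only from commands already visible in this log, a single round looks roughly like the sketch below; hostrpc expands to scripts/rpc.py -s /var/tmp/host.sock as the target/auth.sh@31 traces show, rpc_cmd appears to be the test's target-side RPC helper, and the NQNs, address, and key names are the ones used throughout. This is a summary sketch, not additional log output.

    # Host side: restrict negotiation to one digest/dhgroup pair for this round.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    # Target side: authorize the host NQN with a DH-HMAC-CHAP key (plus controller key when present).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Host side: attach a controller, which performs the authentication handshake.
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # Verify the controller came up and the target-side qpair reports auth.state "completed".
    hostrpc bdev_nvme_get_controllers
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
    hostrpc bdev_nvme_detach_controller nvme0
    # Repeat the check with the kernel initiator, passing the raw secrets for the same keys.
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
        --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 \
        --dhchap-secret DHHC-1:... --dhchap-ctrl-secret DHHC-1:...
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1

The same round then repeats with key2 and key3, and again for each dhgroup exercised in this part of the log (null, ffdhe2048, ffdhe3072), all under the sha256 digest.
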
00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:27.763 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:28.331 00:11:28.331 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:28.331 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:28.331 16:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:28.590 { 00:11:28.590 "cntlid": 15, 00:11:28.590 "qid": 0, 00:11:28.590 "state": "enabled", 00:11:28.590 "thread": "nvmf_tgt_poll_group_000", 00:11:28.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:28.590 "listen_address": { 00:11:28.590 "trtype": "TCP", 00:11:28.590 "adrfam": "IPv4", 00:11:28.590 "traddr": "10.0.0.3", 00:11:28.590 "trsvcid": "4420" 00:11:28.590 }, 00:11:28.590 "peer_address": { 00:11:28.590 "trtype": "TCP", 00:11:28.590 "adrfam": "IPv4", 00:11:28.590 "traddr": "10.0.0.1", 00:11:28.590 "trsvcid": "45536" 00:11:28.590 }, 00:11:28.590 "auth": { 00:11:28.590 "state": "completed", 00:11:28.590 "digest": "sha256", 00:11:28.590 "dhgroup": "ffdhe2048" 00:11:28.590 } 00:11:28.590 } 00:11:28.590 ]' 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:28.590 
16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:28.590 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.850 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:28.850 16:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:29.785 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:30.352 00:11:30.352 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:30.352 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:30.352 16:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:30.610 { 00:11:30.610 "cntlid": 17, 00:11:30.610 "qid": 0, 00:11:30.610 "state": "enabled", 00:11:30.610 "thread": "nvmf_tgt_poll_group_000", 00:11:30.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:30.610 "listen_address": { 00:11:30.610 "trtype": "TCP", 00:11:30.610 "adrfam": "IPv4", 00:11:30.610 "traddr": "10.0.0.3", 00:11:30.610 "trsvcid": "4420" 00:11:30.610 }, 00:11:30.610 "peer_address": { 00:11:30.610 "trtype": "TCP", 00:11:30.610 "adrfam": "IPv4", 00:11:30.610 "traddr": "10.0.0.1", 00:11:30.610 "trsvcid": "45564" 00:11:30.610 }, 00:11:30.610 "auth": { 00:11:30.610 "state": "completed", 00:11:30.610 "digest": "sha256", 00:11:30.610 "dhgroup": "ffdhe3072" 00:11:30.610 } 00:11:30.610 } 00:11:30.610 ]' 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.610 16:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.610 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.870 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:30.870 16:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.805 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:31.806 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:32.382 00:11:32.382 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.382 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.382 16:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.382 { 00:11:32.382 "cntlid": 19, 00:11:32.382 "qid": 0, 00:11:32.382 "state": "enabled", 00:11:32.382 "thread": "nvmf_tgt_poll_group_000", 00:11:32.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:32.382 "listen_address": { 00:11:32.382 "trtype": "TCP", 00:11:32.382 "adrfam": "IPv4", 00:11:32.382 "traddr": "10.0.0.3", 00:11:32.382 "trsvcid": "4420" 00:11:32.382 }, 00:11:32.382 "peer_address": { 00:11:32.382 "trtype": "TCP", 00:11:32.382 "adrfam": "IPv4", 00:11:32.382 "traddr": "10.0.0.1", 00:11:32.382 "trsvcid": "45598" 00:11:32.382 }, 00:11:32.382 "auth": { 00:11:32.382 "state": "completed", 00:11:32.382 "digest": "sha256", 00:11:32.382 "dhgroup": "ffdhe3072" 00:11:32.382 } 00:11:32.382 } 00:11:32.382 ]' 00:11:32.382 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.659 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.917 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:32.917 16:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:33.485 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:33.744 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.003 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:34.261 00:11:34.261 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.261 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.261 16:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.520 { 00:11:34.520 "cntlid": 21, 00:11:34.520 "qid": 0, 00:11:34.520 "state": "enabled", 00:11:34.520 "thread": "nvmf_tgt_poll_group_000", 00:11:34.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:34.520 "listen_address": { 00:11:34.520 "trtype": "TCP", 00:11:34.520 "adrfam": "IPv4", 00:11:34.520 "traddr": "10.0.0.3", 00:11:34.520 "trsvcid": "4420" 00:11:34.520 }, 00:11:34.520 "peer_address": { 00:11:34.520 "trtype": "TCP", 00:11:34.520 "adrfam": "IPv4", 00:11:34.520 "traddr": "10.0.0.1", 00:11:34.520 "trsvcid": "45106" 00:11:34.520 }, 00:11:34.520 "auth": { 00:11:34.520 "state": "completed", 00:11:34.520 "digest": "sha256", 00:11:34.520 "dhgroup": "ffdhe3072" 00:11:34.520 } 00:11:34.520 } 00:11:34.520 ]' 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.520 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:34.520 16:06:41 
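Each attach above is followed by the same verification: the target's qpair list is fetched over RPC and the negotiated auth parameters are checked with jq. A minimal sketch of that check, using the RPC method, subsystem NQN and jq filters that appear in the trace (rpc_cmd is the test's own wrapper around rpc.py for the target application; the expected digest/dhgroup values are the ones for this part of the run):

    # Sketch: confirm the new qpair completed DH-HMAC-CHAP with the expected parameters.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]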
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.779 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:34.779 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.779 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.779 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.779 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.039 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:35.039 16:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:35.606 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.607 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:35.866 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:36.125 00:11:36.125 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.125 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.125 16:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.384 { 00:11:36.384 "cntlid": 23, 00:11:36.384 "qid": 0, 00:11:36.384 "state": "enabled", 00:11:36.384 "thread": "nvmf_tgt_poll_group_000", 00:11:36.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:36.384 "listen_address": { 00:11:36.384 "trtype": "TCP", 00:11:36.384 "adrfam": "IPv4", 00:11:36.384 "traddr": "10.0.0.3", 00:11:36.384 "trsvcid": "4420" 00:11:36.384 }, 00:11:36.384 "peer_address": { 00:11:36.384 "trtype": "TCP", 00:11:36.384 "adrfam": "IPv4", 00:11:36.384 "traddr": "10.0.0.1", 00:11:36.384 "trsvcid": "45134" 00:11:36.384 }, 00:11:36.384 "auth": { 00:11:36.384 "state": "completed", 00:11:36.384 "digest": "sha256", 00:11:36.384 "dhgroup": "ffdhe3072" 00:11:36.384 } 00:11:36.384 } 00:11:36.384 ]' 00:11:36.384 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:36.643 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:36.901 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:36.901 16:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:37.468 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:37.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.727 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:37.985 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:38.244 00:11:38.244 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.244 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:38.244 16:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:38.503 { 00:11:38.503 "cntlid": 25, 00:11:38.503 "qid": 0, 00:11:38.503 "state": "enabled", 00:11:38.503 "thread": "nvmf_tgt_poll_group_000", 00:11:38.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:38.503 "listen_address": { 00:11:38.503 "trtype": "TCP", 00:11:38.503 "adrfam": "IPv4", 00:11:38.503 "traddr": "10.0.0.3", 00:11:38.503 "trsvcid": "4420" 00:11:38.503 }, 00:11:38.503 "peer_address": { 00:11:38.503 "trtype": "TCP", 00:11:38.503 "adrfam": "IPv4", 00:11:38.503 "traddr": "10.0.0.1", 00:11:38.503 "trsvcid": "45168" 00:11:38.503 }, 00:11:38.503 "auth": { 00:11:38.503 "state": "completed", 00:11:38.503 "digest": "sha256", 00:11:38.503 "dhgroup": "ffdhe4096" 00:11:38.503 } 00:11:38.503 } 00:11:38.503 ]' 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:38.503 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:38.762 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:38.762 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:38.762 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:38.762 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:38.762 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.021 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:39.021 16:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:39.588 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:39.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.589 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:39.848 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:40.414 00:11:40.414 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:40.414 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:40.414 16:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.414 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:40.414 { 00:11:40.414 "cntlid": 27, 00:11:40.414 "qid": 0, 00:11:40.414 "state": "enabled", 00:11:40.414 "thread": "nvmf_tgt_poll_group_000", 00:11:40.414 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:40.414 "listen_address": { 00:11:40.414 "trtype": "TCP", 00:11:40.414 "adrfam": "IPv4", 00:11:40.414 "traddr": "10.0.0.3", 00:11:40.414 "trsvcid": "4420" 00:11:40.414 }, 00:11:40.415 "peer_address": { 00:11:40.415 "trtype": "TCP", 00:11:40.415 "adrfam": "IPv4", 00:11:40.415 "traddr": "10.0.0.1", 00:11:40.415 "trsvcid": "45196" 00:11:40.415 }, 00:11:40.415 "auth": { 00:11:40.415 "state": "completed", 
00:11:40.415 "digest": "sha256", 00:11:40.415 "dhgroup": "ffdhe4096" 00:11:40.415 } 00:11:40.415 } 00:11:40.415 ]' 00:11:40.415 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:40.673 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.931 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:40.931 16:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:41.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.501 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:41.761 16:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:41.761 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:42.020 00:11:42.020 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:42.020 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:42.020 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:42.588 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:42.588 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:42.588 16:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:42.588 { 00:11:42.588 "cntlid": 29, 00:11:42.588 "qid": 0, 00:11:42.588 "state": "enabled", 00:11:42.588 "thread": "nvmf_tgt_poll_group_000", 00:11:42.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:42.588 "listen_address": { 00:11:42.588 "trtype": "TCP", 00:11:42.588 "adrfam": "IPv4", 00:11:42.588 "traddr": "10.0.0.3", 00:11:42.588 "trsvcid": "4420" 00:11:42.588 }, 00:11:42.588 "peer_address": { 00:11:42.588 "trtype": "TCP", 00:11:42.588 "adrfam": 
"IPv4", 00:11:42.588 "traddr": "10.0.0.1", 00:11:42.588 "trsvcid": "45232" 00:11:42.588 }, 00:11:42.588 "auth": { 00:11:42.588 "state": "completed", 00:11:42.588 "digest": "sha256", 00:11:42.588 "dhgroup": "ffdhe4096" 00:11:42.588 } 00:11:42.588 } 00:11:42.588 ]' 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:42.588 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:42.847 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:42.847 16:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:43.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.414 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:43.672 16:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.672 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:43.930 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.189 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.448 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.448 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:44.448 { 00:11:44.448 "cntlid": 31, 00:11:44.448 "qid": 0, 00:11:44.448 "state": "enabled", 00:11:44.448 "thread": "nvmf_tgt_poll_group_000", 00:11:44.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:44.448 "listen_address": { 00:11:44.448 "trtype": "TCP", 00:11:44.448 "adrfam": "IPv4", 00:11:44.448 "traddr": "10.0.0.3", 00:11:44.448 "trsvcid": "4420" 00:11:44.448 }, 00:11:44.448 "peer_address": { 00:11:44.448 "trtype": "TCP", 
00:11:44.448 "adrfam": "IPv4", 00:11:44.448 "traddr": "10.0.0.1", 00:11:44.448 "trsvcid": "49984" 00:11:44.448 }, 00:11:44.448 "auth": { 00:11:44.448 "state": "completed", 00:11:44.448 "digest": "sha256", 00:11:44.448 "dhgroup": "ffdhe4096" 00:11:44.448 } 00:11:44.448 } 00:11:44.448 ]' 00:11:44.448 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:44.448 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:44.448 16:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:44.448 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:44.448 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:44.448 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:44.448 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:44.448 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.707 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:44.707 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.294 16:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:45.565 
16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.565 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.823 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.824 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.824 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:45.824 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:46.082 00:11:46.082 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:46.082 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:46.082 16:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:46.340 { 00:11:46.340 "cntlid": 33, 00:11:46.340 "qid": 0, 00:11:46.340 "state": "enabled", 00:11:46.340 "thread": "nvmf_tgt_poll_group_000", 00:11:46.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:46.340 "listen_address": { 00:11:46.340 "trtype": "TCP", 00:11:46.340 "adrfam": "IPv4", 00:11:46.340 "traddr": 
"10.0.0.3", 00:11:46.340 "trsvcid": "4420" 00:11:46.340 }, 00:11:46.340 "peer_address": { 00:11:46.340 "trtype": "TCP", 00:11:46.340 "adrfam": "IPv4", 00:11:46.340 "traddr": "10.0.0.1", 00:11:46.340 "trsvcid": "50014" 00:11:46.340 }, 00:11:46.340 "auth": { 00:11:46.340 "state": "completed", 00:11:46.340 "digest": "sha256", 00:11:46.340 "dhgroup": "ffdhe6144" 00:11:46.340 } 00:11:46.340 } 00:11:46.340 ]' 00:11:46.340 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.600 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.859 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:46.859 16:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.427 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:47.686 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:48.252 00:11:48.252 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.252 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:48.252 16:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.511 { 00:11:48.511 "cntlid": 35, 00:11:48.511 "qid": 0, 00:11:48.511 "state": "enabled", 00:11:48.511 "thread": "nvmf_tgt_poll_group_000", 
00:11:48.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:48.511 "listen_address": { 00:11:48.511 "trtype": "TCP", 00:11:48.511 "adrfam": "IPv4", 00:11:48.511 "traddr": "10.0.0.3", 00:11:48.511 "trsvcid": "4420" 00:11:48.511 }, 00:11:48.511 "peer_address": { 00:11:48.511 "trtype": "TCP", 00:11:48.511 "adrfam": "IPv4", 00:11:48.511 "traddr": "10.0.0.1", 00:11:48.511 "trsvcid": "50042" 00:11:48.511 }, 00:11:48.511 "auth": { 00:11:48.511 "state": "completed", 00:11:48.511 "digest": "sha256", 00:11:48.511 "dhgroup": "ffdhe6144" 00:11:48.511 } 00:11:48.511 } 00:11:48.511 ]' 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:48.511 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.770 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.770 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.770 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:49.028 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:49.028 16:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.597 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.597 16:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:49.856 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:50.423 00:11:50.423 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.423 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.423 16:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.682 { 
00:11:50.682 "cntlid": 37, 00:11:50.682 "qid": 0, 00:11:50.682 "state": "enabled", 00:11:50.682 "thread": "nvmf_tgt_poll_group_000", 00:11:50.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:50.682 "listen_address": { 00:11:50.682 "trtype": "TCP", 00:11:50.682 "adrfam": "IPv4", 00:11:50.682 "traddr": "10.0.0.3", 00:11:50.682 "trsvcid": "4420" 00:11:50.682 }, 00:11:50.682 "peer_address": { 00:11:50.682 "trtype": "TCP", 00:11:50.682 "adrfam": "IPv4", 00:11:50.682 "traddr": "10.0.0.1", 00:11:50.682 "trsvcid": "50062" 00:11:50.682 }, 00:11:50.682 "auth": { 00:11:50.682 "state": "completed", 00:11:50.682 "digest": "sha256", 00:11:50.682 "dhgroup": "ffdhe6144" 00:11:50.682 } 00:11:50.682 } 00:11:50.682 ]' 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.682 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.942 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:50.942 16:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:11:51.508 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.508 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:51.508 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.508 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.767 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.767 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.767 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:51.767 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.026 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:52.285 00:11:52.285 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.285 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.285 16:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:52.852 { 00:11:52.852 "cntlid": 39, 00:11:52.852 "qid": 0, 00:11:52.852 "state": "enabled", 00:11:52.852 "thread": "nvmf_tgt_poll_group_000", 00:11:52.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:52.852 "listen_address": { 00:11:52.852 "trtype": "TCP", 00:11:52.852 "adrfam": "IPv4", 00:11:52.852 "traddr": "10.0.0.3", 00:11:52.852 "trsvcid": "4420" 00:11:52.852 }, 00:11:52.852 "peer_address": { 00:11:52.852 "trtype": "TCP", 00:11:52.852 "adrfam": "IPv4", 00:11:52.852 "traddr": "10.0.0.1", 00:11:52.852 "trsvcid": "50082" 00:11:52.852 }, 00:11:52.852 "auth": { 00:11:52.852 "state": "completed", 00:11:52.852 "digest": "sha256", 00:11:52.852 "dhgroup": "ffdhe6144" 00:11:52.852 } 00:11:52.852 } 00:11:52.852 ]' 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.852 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.111 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:53.111 16:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:53.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.678 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:53.937 16:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:54.873 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.873 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.132 { 00:11:55.132 "cntlid": 41, 00:11:55.132 "qid": 0, 00:11:55.132 "state": "enabled", 00:11:55.132 "thread": "nvmf_tgt_poll_group_000", 00:11:55.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:55.132 "listen_address": { 00:11:55.132 "trtype": "TCP", 00:11:55.132 "adrfam": "IPv4", 00:11:55.132 "traddr": "10.0.0.3", 00:11:55.132 "trsvcid": "4420" 00:11:55.132 }, 00:11:55.132 "peer_address": { 00:11:55.132 "trtype": "TCP", 00:11:55.132 "adrfam": "IPv4", 00:11:55.132 "traddr": "10.0.0.1", 00:11:55.132 "trsvcid": "55206" 00:11:55.132 }, 00:11:55.132 "auth": { 00:11:55.132 "state": "completed", 00:11:55.132 "digest": "sha256", 00:11:55.132 "dhgroup": "ffdhe8192" 00:11:55.132 } 00:11:55.132 } 00:11:55.132 ]' 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.132 16:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.390 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:55.390 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:55.958 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.526 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:56.527 16:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:57.093 00:11:57.094 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.094 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.094 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.352 16:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.352 { 00:11:57.352 "cntlid": 43, 00:11:57.352 "qid": 0, 00:11:57.352 "state": "enabled", 00:11:57.352 "thread": "nvmf_tgt_poll_group_000", 00:11:57.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:57.352 "listen_address": { 00:11:57.352 "trtype": "TCP", 00:11:57.352 "adrfam": "IPv4", 00:11:57.352 "traddr": "10.0.0.3", 00:11:57.352 "trsvcid": "4420" 00:11:57.352 }, 00:11:57.352 "peer_address": { 00:11:57.352 "trtype": "TCP", 00:11:57.352 "adrfam": "IPv4", 00:11:57.352 "traddr": "10.0.0.1", 00:11:57.352 "trsvcid": "55230" 00:11:57.352 }, 00:11:57.352 "auth": { 00:11:57.352 "state": "completed", 00:11:57.352 "digest": "sha256", 00:11:57.352 "dhgroup": "ffdhe8192" 00:11:57.352 } 00:11:57.352 } 00:11:57.352 ]' 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:57.352 16:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.352 16:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.352 16:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.352 16:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.611 16:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:57.611 16:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.590 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:58.849 16:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:59.416 00:11:59.416 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.416 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.416 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.675 16:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.675 { 00:11:59.675 "cntlid": 45, 00:11:59.675 "qid": 0, 00:11:59.675 "state": "enabled", 00:11:59.675 "thread": "nvmf_tgt_poll_group_000", 00:11:59.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:11:59.675 "listen_address": { 00:11:59.675 "trtype": "TCP", 00:11:59.675 "adrfam": "IPv4", 00:11:59.675 "traddr": "10.0.0.3", 00:11:59.675 "trsvcid": "4420" 00:11:59.675 }, 00:11:59.675 "peer_address": { 00:11:59.675 "trtype": "TCP", 00:11:59.675 "adrfam": "IPv4", 00:11:59.675 "traddr": "10.0.0.1", 00:11:59.675 "trsvcid": "55256" 00:11:59.675 }, 00:11:59.675 "auth": { 00:11:59.675 "state": "completed", 00:11:59.675 "digest": "sha256", 00:11:59.675 "dhgroup": "ffdhe8192" 00:11:59.675 } 00:11:59.675 } 00:11:59.675 ]' 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:59.675 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.933 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.933 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.933 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.933 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.933 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.192 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:00.192 16:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:01.129 16:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:02.065 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.065 
16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.065 { 00:12:02.065 "cntlid": 47, 00:12:02.065 "qid": 0, 00:12:02.065 "state": "enabled", 00:12:02.065 "thread": "nvmf_tgt_poll_group_000", 00:12:02.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:02.065 "listen_address": { 00:12:02.065 "trtype": "TCP", 00:12:02.065 "adrfam": "IPv4", 00:12:02.065 "traddr": "10.0.0.3", 00:12:02.065 "trsvcid": "4420" 00:12:02.065 }, 00:12:02.065 "peer_address": { 00:12:02.065 "trtype": "TCP", 00:12:02.065 "adrfam": "IPv4", 00:12:02.065 "traddr": "10.0.0.1", 00:12:02.065 "trsvcid": "55280" 00:12:02.065 }, 00:12:02.065 "auth": { 00:12:02.065 "state": "completed", 00:12:02.065 "digest": "sha256", 00:12:02.065 "dhgroup": "ffdhe8192" 00:12:02.065 } 00:12:02.065 } 00:12:02.065 ]' 00:12:02.065 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.325 16:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.584 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:02.584 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.519 16:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:03.779 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:04.038 00:12:04.038 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.038 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.038 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.296 { 00:12:04.296 "cntlid": 49, 00:12:04.296 "qid": 0, 00:12:04.296 "state": "enabled", 00:12:04.296 "thread": "nvmf_tgt_poll_group_000", 00:12:04.296 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:04.296 "listen_address": { 00:12:04.296 "trtype": "TCP", 00:12:04.296 "adrfam": "IPv4", 00:12:04.296 "traddr": "10.0.0.3", 00:12:04.296 "trsvcid": "4420" 00:12:04.296 }, 00:12:04.296 "peer_address": { 00:12:04.296 "trtype": "TCP", 00:12:04.296 "adrfam": "IPv4", 00:12:04.296 "traddr": "10.0.0.1", 00:12:04.296 "trsvcid": "53362" 00:12:04.296 }, 00:12:04.296 "auth": { 00:12:04.296 "state": "completed", 00:12:04.296 "digest": "sha384", 00:12:04.296 "dhgroup": "null" 00:12:04.296 } 00:12:04.296 } 00:12:04.296 ]' 00:12:04.296 16:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.555 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:04.815 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:04.815 16:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.751 16:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:05.751 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:06.010 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:06.010 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.010 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:06.010 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:06.010 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.011 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:06.269 00:12:06.269 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.270 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.270 16:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:06.838 { 00:12:06.838 "cntlid": 51, 00:12:06.838 "qid": 0, 00:12:06.838 "state": "enabled", 00:12:06.838 "thread": "nvmf_tgt_poll_group_000", 00:12:06.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:06.838 "listen_address": { 00:12:06.838 "trtype": "TCP", 00:12:06.838 "adrfam": "IPv4", 00:12:06.838 "traddr": "10.0.0.3", 00:12:06.838 "trsvcid": "4420" 00:12:06.838 }, 00:12:06.838 "peer_address": { 00:12:06.838 "trtype": "TCP", 00:12:06.838 "adrfam": "IPv4", 00:12:06.838 "traddr": "10.0.0.1", 00:12:06.838 "trsvcid": "53388" 00:12:06.838 }, 00:12:06.838 "auth": { 00:12:06.838 "state": "completed", 00:12:06.838 "digest": "sha384", 00:12:06.838 "dhgroup": "null" 00:12:06.838 } 00:12:06.838 } 00:12:06.838 ]' 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:06.838 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.405 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:07.405 16:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:07.972 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:07.972 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.231 16:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:08.798 00:12:08.798 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.798 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:08.798 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.057 { 00:12:09.057 "cntlid": 53, 00:12:09.057 "qid": 0, 00:12:09.057 "state": "enabled", 00:12:09.057 "thread": "nvmf_tgt_poll_group_000", 00:12:09.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:09.057 "listen_address": { 00:12:09.057 "trtype": "TCP", 00:12:09.057 "adrfam": "IPv4", 00:12:09.057 "traddr": "10.0.0.3", 00:12:09.057 "trsvcid": "4420" 00:12:09.057 }, 00:12:09.057 "peer_address": { 00:12:09.057 "trtype": "TCP", 00:12:09.057 "adrfam": "IPv4", 00:12:09.057 "traddr": "10.0.0.1", 00:12:09.057 "trsvcid": "53412" 00:12:09.057 }, 00:12:09.057 "auth": { 00:12:09.057 "state": "completed", 00:12:09.057 "digest": "sha384", 00:12:09.057 "dhgroup": "null" 00:12:09.057 } 00:12:09.057 } 00:12:09.057 ]' 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.057 16:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.625 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:09.625 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:10.192 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:10.193 16:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.452 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:10.710 00:12:10.710 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:10.710 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
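For orientation, each connect_authenticate round recorded above reduces to the host/target command sequence sketched below. This is a condensed reconstruction from the logged commands, not an excerpt of target/auth.sh: rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, host-side calls go through the -s /var/tmp/host.sock application while target-side calls use the default socket, key2/ckey2 are key names registered earlier in the test, and the DHHC-1 secrets passed to nvme-cli are shown as placeholders rather than the logged values.

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1
  subnqn=nqn.2024-03.io.spdk:cnode0
  # host side: restrict the initiator to the digest/dhgroup under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # target side: authorize the host NQN with the key pair for this iteration
  rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # host side: authenticated attach through the SPDK bdev initiator
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # target side: inspect the qpair's negotiated auth parameters
  rpc.py nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # expect sha384 / null / completed
  rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  # kernel-initiator leg of the same round via nvme-cli (secrets elided)
  nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 \
      --dhchap-secret 'DHHC-1:02:<host-secret>' --dhchap-ctrl-secret 'DHHC-1:01:<ctrl-secret>'
  nvme disconnect -n "$subnqn"                              # expect: 1 controller(s) disconnected
  rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"    # target cleanup before the next key/dhgroup

The later rounds in the trace follow the same pattern, varying only the --dhchap-dhgroups value (ffdhe2048, ffdhe3072, ffdhe4096) and the key index.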
00:12:10.710 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:10.969 { 00:12:10.969 "cntlid": 55, 00:12:10.969 "qid": 0, 00:12:10.969 "state": "enabled", 00:12:10.969 "thread": "nvmf_tgt_poll_group_000", 00:12:10.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:10.969 "listen_address": { 00:12:10.969 "trtype": "TCP", 00:12:10.969 "adrfam": "IPv4", 00:12:10.969 "traddr": "10.0.0.3", 00:12:10.969 "trsvcid": "4420" 00:12:10.969 }, 00:12:10.969 "peer_address": { 00:12:10.969 "trtype": "TCP", 00:12:10.969 "adrfam": "IPv4", 00:12:10.969 "traddr": "10.0.0.1", 00:12:10.969 "trsvcid": "53428" 00:12:10.969 }, 00:12:10.969 "auth": { 00:12:10.969 "state": "completed", 00:12:10.969 "digest": "sha384", 00:12:10.969 "dhgroup": "null" 00:12:10.969 } 00:12:10.969 } 00:12:10.969 ]' 00:12:10.969 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.228 16:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.490 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:11.490 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:12.058 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.317 16:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.588 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:12.852 00:12:12.852 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:12.852 
16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:12.852 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.111 { 00:12:13.111 "cntlid": 57, 00:12:13.111 "qid": 0, 00:12:13.111 "state": "enabled", 00:12:13.111 "thread": "nvmf_tgt_poll_group_000", 00:12:13.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:13.111 "listen_address": { 00:12:13.111 "trtype": "TCP", 00:12:13.111 "adrfam": "IPv4", 00:12:13.111 "traddr": "10.0.0.3", 00:12:13.111 "trsvcid": "4420" 00:12:13.111 }, 00:12:13.111 "peer_address": { 00:12:13.111 "trtype": "TCP", 00:12:13.111 "adrfam": "IPv4", 00:12:13.111 "traddr": "10.0.0.1", 00:12:13.111 "trsvcid": "53452" 00:12:13.111 }, 00:12:13.111 "auth": { 00:12:13.111 "state": "completed", 00:12:13.111 "digest": "sha384", 00:12:13.111 "dhgroup": "ffdhe2048" 00:12:13.111 } 00:12:13.111 } 00:12:13.111 ]' 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.111 16:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.370 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:13.370 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: 
--dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.306 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.307 16:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.566 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:14.825 00:12:14.825 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:14.825 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:14.825 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.084 { 00:12:15.084 "cntlid": 59, 00:12:15.084 "qid": 0, 00:12:15.084 "state": "enabled", 00:12:15.084 "thread": "nvmf_tgt_poll_group_000", 00:12:15.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:15.084 "listen_address": { 00:12:15.084 "trtype": "TCP", 00:12:15.084 "adrfam": "IPv4", 00:12:15.084 "traddr": "10.0.0.3", 00:12:15.084 "trsvcid": "4420" 00:12:15.084 }, 00:12:15.084 "peer_address": { 00:12:15.084 "trtype": "TCP", 00:12:15.084 "adrfam": "IPv4", 00:12:15.084 "traddr": "10.0.0.1", 00:12:15.084 "trsvcid": "56132" 00:12:15.084 }, 00:12:15.084 "auth": { 00:12:15.084 "state": "completed", 00:12:15.084 "digest": "sha384", 00:12:15.084 "dhgroup": "ffdhe2048" 00:12:15.084 } 00:12:15.084 } 00:12:15.084 ]' 00:12:15.084 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.343 16:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.602 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:15.602 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.167 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:16.167 16:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.734 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:16.992 00:12:16.992 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:16.992 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:16.992 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.250 { 00:12:17.250 "cntlid": 61, 00:12:17.250 "qid": 0, 00:12:17.250 "state": "enabled", 00:12:17.250 "thread": "nvmf_tgt_poll_group_000", 00:12:17.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:17.250 "listen_address": { 00:12:17.250 "trtype": "TCP", 00:12:17.250 "adrfam": "IPv4", 00:12:17.250 "traddr": "10.0.0.3", 00:12:17.250 "trsvcid": "4420" 00:12:17.250 }, 00:12:17.250 "peer_address": { 00:12:17.250 "trtype": "TCP", 00:12:17.250 "adrfam": "IPv4", 00:12:17.250 "traddr": "10.0.0.1", 00:12:17.250 "trsvcid": "56168" 00:12:17.250 }, 00:12:17.250 "auth": { 00:12:17.250 "state": "completed", 00:12:17.250 "digest": "sha384", 00:12:17.250 "dhgroup": "ffdhe2048" 00:12:17.250 } 00:12:17.250 } 00:12:17.250 ]' 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.250 16:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.523 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:17.523 16:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.501 16:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:18.760 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:19.019 00:12:19.019 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:19.019 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.019 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.278 { 00:12:19.278 "cntlid": 63, 00:12:19.278 "qid": 0, 00:12:19.278 "state": "enabled", 00:12:19.278 "thread": "nvmf_tgt_poll_group_000", 00:12:19.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:19.278 "listen_address": { 00:12:19.278 "trtype": "TCP", 00:12:19.278 "adrfam": "IPv4", 00:12:19.278 "traddr": "10.0.0.3", 00:12:19.278 "trsvcid": "4420" 00:12:19.278 }, 00:12:19.278 "peer_address": { 00:12:19.278 "trtype": "TCP", 00:12:19.278 "adrfam": "IPv4", 00:12:19.278 "traddr": "10.0.0.1", 00:12:19.278 "trsvcid": "56176" 00:12:19.278 }, 00:12:19.278 "auth": { 00:12:19.278 "state": "completed", 00:12:19.278 "digest": "sha384", 00:12:19.278 "dhgroup": "ffdhe2048" 00:12:19.278 } 00:12:19.278 } 00:12:19.278 ]' 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.278 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.537 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.537 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.537 16:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.796 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:19.796 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.364 16:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:20.623 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:21.190 00:12:21.190 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.190 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.190 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.449 { 00:12:21.449 "cntlid": 65, 00:12:21.449 "qid": 0, 00:12:21.449 "state": "enabled", 00:12:21.449 "thread": "nvmf_tgt_poll_group_000", 00:12:21.449 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:21.449 "listen_address": { 00:12:21.449 "trtype": "TCP", 00:12:21.449 "adrfam": "IPv4", 00:12:21.449 "traddr": "10.0.0.3", 00:12:21.449 "trsvcid": "4420" 00:12:21.449 }, 00:12:21.449 "peer_address": { 00:12:21.449 "trtype": "TCP", 00:12:21.449 "adrfam": "IPv4", 00:12:21.449 "traddr": "10.0.0.1", 00:12:21.449 "trsvcid": "56184" 00:12:21.449 }, 00:12:21.449 "auth": { 00:12:21.449 "state": "completed", 00:12:21.449 "digest": "sha384", 00:12:21.449 "dhgroup": "ffdhe3072" 00:12:21.449 } 00:12:21.449 } 00:12:21.449 ]' 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.449 16:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.449 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:21.449 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.449 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.449 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.449 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.708 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:21.708 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:22.645 16:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.646 16:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:22.646 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:23.216 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.216 { 00:12:23.216 "cntlid": 67, 00:12:23.216 "qid": 0, 00:12:23.216 "state": "enabled", 00:12:23.216 "thread": "nvmf_tgt_poll_group_000", 00:12:23.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:23.216 "listen_address": { 00:12:23.216 "trtype": "TCP", 00:12:23.216 "adrfam": "IPv4", 00:12:23.216 "traddr": "10.0.0.3", 00:12:23.216 "trsvcid": "4420" 00:12:23.216 }, 00:12:23.216 "peer_address": { 00:12:23.216 "trtype": "TCP", 00:12:23.216 "adrfam": "IPv4", 00:12:23.216 "traddr": "10.0.0.1", 00:12:23.216 "trsvcid": "56196" 00:12:23.216 }, 00:12:23.216 "auth": { 00:12:23.216 "state": "completed", 00:12:23.216 "digest": "sha384", 00:12:23.216 "dhgroup": "ffdhe3072" 00:12:23.216 } 00:12:23.216 } 00:12:23.216 ]' 00:12:23.216 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.475 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.475 16:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.475 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:23.475 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.475 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.475 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.475 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.734 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:23.734 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:24.303 16:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.303 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:24.303 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.303 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:24.562 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:25.130 00:12:25.130 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.130 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.130 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.389 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.389 { 00:12:25.389 "cntlid": 69, 00:12:25.389 "qid": 0, 00:12:25.389 "state": "enabled", 00:12:25.389 "thread": "nvmf_tgt_poll_group_000", 00:12:25.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:25.389 "listen_address": { 00:12:25.389 "trtype": "TCP", 00:12:25.389 "adrfam": "IPv4", 00:12:25.389 "traddr": "10.0.0.3", 00:12:25.389 "trsvcid": "4420" 00:12:25.389 }, 00:12:25.389 "peer_address": { 00:12:25.389 "trtype": "TCP", 00:12:25.389 "adrfam": "IPv4", 00:12:25.389 "traddr": "10.0.0.1", 00:12:25.389 "trsvcid": "55540" 00:12:25.389 }, 00:12:25.389 "auth": { 00:12:25.389 "state": "completed", 00:12:25.389 "digest": "sha384", 00:12:25.389 "dhgroup": "ffdhe3072" 00:12:25.389 } 00:12:25.390 } 00:12:25.390 ]' 00:12:25.390 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.390 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.390 16:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.390 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.390 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.649 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.649 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:25.649 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.649 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:25.649 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.586 16:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:26.586 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.587 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:26.587 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.587 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.846 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.846 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:26.846 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:26.846 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:27.104 00:12:27.104 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.104 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.104 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.362 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.362 { 00:12:27.362 "cntlid": 71, 00:12:27.362 "qid": 0, 00:12:27.362 "state": "enabled", 00:12:27.362 "thread": "nvmf_tgt_poll_group_000", 00:12:27.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:27.362 "listen_address": { 00:12:27.362 "trtype": "TCP", 00:12:27.362 "adrfam": "IPv4", 00:12:27.362 "traddr": "10.0.0.3", 00:12:27.362 "trsvcid": "4420" 00:12:27.363 }, 00:12:27.363 "peer_address": { 00:12:27.363 "trtype": "TCP", 00:12:27.363 "adrfam": "IPv4", 00:12:27.363 "traddr": "10.0.0.1", 00:12:27.363 "trsvcid": "55556" 00:12:27.363 }, 00:12:27.363 "auth": { 00:12:27.363 "state": "completed", 00:12:27.363 "digest": "sha384", 00:12:27.363 "dhgroup": "ffdhe3072" 00:12:27.363 } 00:12:27.363 } 00:12:27.363 ]' 00:12:27.363 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.363 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.363 16:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.363 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.363 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.621 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.621 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.621 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.879 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:27.879 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.447 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.447 16:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.706 16:07:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.706 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:28.965 00:12:28.965 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:28.965 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:28.965 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.224 { 00:12:29.224 "cntlid": 73, 00:12:29.224 "qid": 0, 00:12:29.224 "state": "enabled", 00:12:29.224 "thread": "nvmf_tgt_poll_group_000", 00:12:29.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:29.224 "listen_address": { 00:12:29.224 "trtype": "TCP", 00:12:29.224 "adrfam": "IPv4", 00:12:29.224 "traddr": "10.0.0.3", 00:12:29.224 "trsvcid": "4420" 00:12:29.224 }, 00:12:29.224 "peer_address": { 00:12:29.224 "trtype": "TCP", 00:12:29.224 "adrfam": "IPv4", 00:12:29.224 "traddr": "10.0.0.1", 00:12:29.224 "trsvcid": "55580" 00:12:29.224 }, 00:12:29.224 "auth": { 00:12:29.224 "state": "completed", 00:12:29.224 "digest": "sha384", 00:12:29.224 "dhgroup": "ffdhe4096" 00:12:29.224 } 00:12:29.224 } 00:12:29.224 ]' 00:12:29.224 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.482 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.482 16:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.483 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:29.483 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.483 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.483 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:29.483 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.742 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:29.742 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.309 16:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:30.567 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.568 16:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.568 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.826 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.826 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.826 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:30.826 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:31.084 00:12:31.084 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.084 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.084 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.343 { 00:12:31.343 "cntlid": 75, 00:12:31.343 "qid": 0, 00:12:31.343 "state": "enabled", 00:12:31.343 "thread": "nvmf_tgt_poll_group_000", 00:12:31.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:31.343 "listen_address": { 00:12:31.343 "trtype": "TCP", 00:12:31.343 "adrfam": "IPv4", 00:12:31.343 "traddr": "10.0.0.3", 00:12:31.343 "trsvcid": "4420" 00:12:31.343 }, 00:12:31.343 "peer_address": { 00:12:31.343 "trtype": "TCP", 00:12:31.343 "adrfam": "IPv4", 00:12:31.343 "traddr": "10.0.0.1", 00:12:31.343 "trsvcid": "55616" 00:12:31.343 }, 00:12:31.343 "auth": { 00:12:31.343 "state": "completed", 00:12:31.343 "digest": "sha384", 00:12:31.343 "dhgroup": "ffdhe4096" 00:12:31.343 } 00:12:31.343 } 00:12:31.343 ]' 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.343 16:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.343 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:31.343 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.601 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.601 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.601 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.858 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:31.858 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:32.425 16:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.425 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:32.682 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:33.249 00:12:33.249 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:33.249 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:33.249 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.507 { 00:12:33.507 "cntlid": 77, 00:12:33.507 "qid": 0, 00:12:33.507 "state": "enabled", 00:12:33.507 "thread": "nvmf_tgt_poll_group_000", 00:12:33.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:33.507 "listen_address": { 00:12:33.507 "trtype": "TCP", 00:12:33.507 "adrfam": "IPv4", 00:12:33.507 "traddr": "10.0.0.3", 00:12:33.507 "trsvcid": "4420" 00:12:33.507 }, 00:12:33.507 "peer_address": { 00:12:33.507 "trtype": "TCP", 00:12:33.507 "adrfam": "IPv4", 00:12:33.507 "traddr": "10.0.0.1", 00:12:33.507 "trsvcid": "55636" 00:12:33.507 }, 00:12:33.507 "auth": { 00:12:33.507 "state": "completed", 00:12:33.507 "digest": "sha384", 00:12:33.507 "dhgroup": "ffdhe4096" 00:12:33.507 } 00:12:33.507 } 00:12:33.507 ]' 00:12:33.507 16:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.507 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.765 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:33.765 16:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.701 16:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:34.701 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:35.272 00:12:35.272 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.272 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.272 16:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.536 { 00:12:35.536 "cntlid": 79, 00:12:35.536 "qid": 0, 00:12:35.536 "state": "enabled", 00:12:35.536 "thread": "nvmf_tgt_poll_group_000", 00:12:35.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:35.536 "listen_address": { 00:12:35.536 "trtype": "TCP", 00:12:35.536 "adrfam": "IPv4", 00:12:35.536 "traddr": "10.0.0.3", 00:12:35.536 "trsvcid": "4420" 00:12:35.536 }, 00:12:35.536 "peer_address": { 00:12:35.536 "trtype": "TCP", 00:12:35.536 "adrfam": "IPv4", 00:12:35.536 "traddr": "10.0.0.1", 00:12:35.536 "trsvcid": "59514" 00:12:35.536 }, 00:12:35.536 "auth": { 00:12:35.536 "state": "completed", 00:12:35.536 "digest": "sha384", 00:12:35.536 "dhgroup": "ffdhe4096" 00:12:35.536 } 00:12:35.536 } 00:12:35.536 ]' 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.536 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.536 16:07:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.795 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:35.795 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.795 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.795 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.795 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:36.055 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:36.055 16:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.624 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:36.884 16:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:37.452 00:12:37.452 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.452 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.452 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.711 { 00:12:37.711 "cntlid": 81, 00:12:37.711 "qid": 0, 00:12:37.711 "state": "enabled", 00:12:37.711 "thread": "nvmf_tgt_poll_group_000", 00:12:37.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:37.711 "listen_address": { 00:12:37.711 "trtype": "TCP", 00:12:37.711 "adrfam": "IPv4", 00:12:37.711 "traddr": "10.0.0.3", 00:12:37.711 "trsvcid": "4420" 00:12:37.711 }, 00:12:37.711 "peer_address": { 00:12:37.711 "trtype": "TCP", 00:12:37.711 "adrfam": "IPv4", 00:12:37.711 "traddr": "10.0.0.1", 00:12:37.711 "trsvcid": "59530" 00:12:37.711 }, 00:12:37.711 "auth": { 00:12:37.711 "state": "completed", 00:12:37.711 "digest": "sha384", 00:12:37.711 "dhgroup": "ffdhe6144" 00:12:37.711 } 00:12:37.711 } 00:12:37.711 ]' 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.711 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:37.970 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:37.970 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.970 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.970 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.970 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:38.229 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:38.230 16:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:38.798 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.057 16:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:39.626 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.626 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.626 { 00:12:39.626 "cntlid": 83, 00:12:39.626 "qid": 0, 00:12:39.626 "state": "enabled", 00:12:39.626 "thread": "nvmf_tgt_poll_group_000", 00:12:39.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:39.626 "listen_address": { 00:12:39.626 "trtype": "TCP", 00:12:39.626 "adrfam": "IPv4", 00:12:39.626 "traddr": "10.0.0.3", 00:12:39.626 "trsvcid": "4420" 00:12:39.626 }, 00:12:39.626 "peer_address": { 00:12:39.626 "trtype": "TCP", 00:12:39.626 "adrfam": "IPv4", 00:12:39.626 "traddr": "10.0.0.1", 00:12:39.626 "trsvcid": "59568" 00:12:39.626 }, 00:12:39.626 "auth": { 00:12:39.626 "state": "completed", 00:12:39.626 "digest": "sha384", 
00:12:39.626 "dhgroup": "ffdhe6144" 00:12:39.626 } 00:12:39.626 } 00:12:39.626 ]' 00:12:39.885 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.885 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.885 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.885 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:39.885 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.886 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.886 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.886 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:40.145 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:40.145 16:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.714 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:40.972 16:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:41.540 00:12:41.540 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:41.540 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:41.540 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.799 { 00:12:41.799 "cntlid": 85, 00:12:41.799 "qid": 0, 00:12:41.799 "state": "enabled", 00:12:41.799 "thread": "nvmf_tgt_poll_group_000", 00:12:41.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:41.799 "listen_address": { 00:12:41.799 "trtype": "TCP", 00:12:41.799 "adrfam": "IPv4", 00:12:41.799 "traddr": "10.0.0.3", 00:12:41.799 "trsvcid": "4420" 00:12:41.799 }, 00:12:41.799 "peer_address": { 00:12:41.799 "trtype": "TCP", 00:12:41.799 "adrfam": "IPv4", 00:12:41.799 "traddr": "10.0.0.1", 00:12:41.799 "trsvcid": "59598" 
00:12:41.799 }, 00:12:41.799 "auth": { 00:12:41.799 "state": "completed", 00:12:41.799 "digest": "sha384", 00:12:41.799 "dhgroup": "ffdhe6144" 00:12:41.799 } 00:12:41.799 } 00:12:41.799 ]' 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.799 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:42.058 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:42.058 16:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:42.626 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.884 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.186 16:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:43.445 00:12:43.445 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.445 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.445 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.704 { 00:12:43.704 "cntlid": 87, 00:12:43.704 "qid": 0, 00:12:43.704 "state": "enabled", 00:12:43.704 "thread": "nvmf_tgt_poll_group_000", 00:12:43.704 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:43.704 "listen_address": { 00:12:43.704 "trtype": "TCP", 00:12:43.704 "adrfam": "IPv4", 00:12:43.704 "traddr": "10.0.0.3", 00:12:43.704 "trsvcid": "4420" 00:12:43.704 }, 00:12:43.704 "peer_address": { 00:12:43.704 "trtype": "TCP", 00:12:43.704 "adrfam": "IPv4", 00:12:43.704 "traddr": "10.0.0.1", 00:12:43.704 "trsvcid": 
"56650" 00:12:43.704 }, 00:12:43.704 "auth": { 00:12:43.704 "state": "completed", 00:12:43.704 "digest": "sha384", 00:12:43.704 "dhgroup": "ffdhe6144" 00:12:43.704 } 00:12:43.704 } 00:12:43.704 ]' 00:12:43.704 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.964 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:44.223 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:44.223 16:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:44.791 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.360 16:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:45.928 00:12:45.928 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.928 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.928 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:46.188 { 00:12:46.188 "cntlid": 89, 00:12:46.188 "qid": 0, 00:12:46.188 "state": "enabled", 00:12:46.188 "thread": "nvmf_tgt_poll_group_000", 00:12:46.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:46.188 "listen_address": { 00:12:46.188 "trtype": "TCP", 00:12:46.188 "adrfam": "IPv4", 00:12:46.188 "traddr": "10.0.0.3", 00:12:46.188 "trsvcid": "4420" 00:12:46.188 }, 00:12:46.188 "peer_address": { 00:12:46.188 
"trtype": "TCP", 00:12:46.188 "adrfam": "IPv4", 00:12:46.188 "traddr": "10.0.0.1", 00:12:46.188 "trsvcid": "56678" 00:12:46.188 }, 00:12:46.188 "auth": { 00:12:46.188 "state": "completed", 00:12:46.188 "digest": "sha384", 00:12:46.188 "dhgroup": "ffdhe8192" 00:12:46.188 } 00:12:46.188 } 00:12:46.188 ]' 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:46.188 16:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.447 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:46.447 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:47.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.385 16:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:47.385 16:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.385 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:47.953 00:12:47.953 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.953 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.953 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:48.522 { 00:12:48.522 "cntlid": 91, 00:12:48.522 "qid": 0, 00:12:48.522 "state": "enabled", 00:12:48.522 "thread": "nvmf_tgt_poll_group_000", 00:12:48.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 
00:12:48.522 "listen_address": { 00:12:48.522 "trtype": "TCP", 00:12:48.522 "adrfam": "IPv4", 00:12:48.522 "traddr": "10.0.0.3", 00:12:48.522 "trsvcid": "4420" 00:12:48.522 }, 00:12:48.522 "peer_address": { 00:12:48.522 "trtype": "TCP", 00:12:48.522 "adrfam": "IPv4", 00:12:48.522 "traddr": "10.0.0.1", 00:12:48.522 "trsvcid": "56702" 00:12:48.522 }, 00:12:48.522 "auth": { 00:12:48.522 "state": "completed", 00:12:48.522 "digest": "sha384", 00:12:48.522 "dhgroup": "ffdhe8192" 00:12:48.522 } 00:12:48.522 } 00:12:48.522 ]' 00:12:48.522 16:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:48.522 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.781 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:48.781 16:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:49.720 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:49.720 16:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:50.665 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:50.665 { 00:12:50.665 "cntlid": 93, 00:12:50.665 "qid": 0, 00:12:50.665 "state": "enabled", 00:12:50.665 "thread": 
"nvmf_tgt_poll_group_000", 00:12:50.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:50.665 "listen_address": { 00:12:50.665 "trtype": "TCP", 00:12:50.665 "adrfam": "IPv4", 00:12:50.665 "traddr": "10.0.0.3", 00:12:50.665 "trsvcid": "4420" 00:12:50.665 }, 00:12:50.665 "peer_address": { 00:12:50.665 "trtype": "TCP", 00:12:50.665 "adrfam": "IPv4", 00:12:50.665 "traddr": "10.0.0.1", 00:12:50.665 "trsvcid": "56732" 00:12:50.665 }, 00:12:50.665 "auth": { 00:12:50.665 "state": "completed", 00:12:50.665 "digest": "sha384", 00:12:50.665 "dhgroup": "ffdhe8192" 00:12:50.665 } 00:12:50.665 } 00:12:50.665 ]' 00:12:50.665 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.924 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:51.183 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:51.183 16:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:51.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:51.751 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.751 16:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.010 16:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:52.579 00:12:52.579 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:52.579 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.579 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:53.146 { 00:12:53.146 "cntlid": 95, 00:12:53.146 "qid": 0, 00:12:53.146 "state": "enabled", 00:12:53.146 
"thread": "nvmf_tgt_poll_group_000", 00:12:53.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:53.146 "listen_address": { 00:12:53.146 "trtype": "TCP", 00:12:53.146 "adrfam": "IPv4", 00:12:53.146 "traddr": "10.0.0.3", 00:12:53.146 "trsvcid": "4420" 00:12:53.146 }, 00:12:53.146 "peer_address": { 00:12:53.146 "trtype": "TCP", 00:12:53.146 "adrfam": "IPv4", 00:12:53.146 "traddr": "10.0.0.1", 00:12:53.146 "trsvcid": "56746" 00:12:53.146 }, 00:12:53.146 "auth": { 00:12:53.146 "state": "completed", 00:12:53.146 "digest": "sha384", 00:12:53.146 "dhgroup": "ffdhe8192" 00:12:53.146 } 00:12:53.146 } 00:12:53.146 ]' 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:53.146 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:53.404 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:53.404 16:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:12:53.973 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.974 16:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:53.974 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.233 16:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:54.802 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.802 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:55.061 { 00:12:55.061 "cntlid": 97, 00:12:55.061 "qid": 0, 00:12:55.061 "state": "enabled", 00:12:55.061 "thread": "nvmf_tgt_poll_group_000", 00:12:55.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:55.061 "listen_address": { 00:12:55.061 "trtype": "TCP", 00:12:55.061 "adrfam": "IPv4", 00:12:55.061 "traddr": "10.0.0.3", 00:12:55.061 "trsvcid": "4420" 00:12:55.061 }, 00:12:55.061 "peer_address": { 00:12:55.061 "trtype": "TCP", 00:12:55.061 "adrfam": "IPv4", 00:12:55.061 "traddr": "10.0.0.1", 00:12:55.061 "trsvcid": "35722" 00:12:55.061 }, 00:12:55.061 "auth": { 00:12:55.061 "state": "completed", 00:12:55.061 "digest": "sha512", 00:12:55.061 "dhgroup": "null" 00:12:55.061 } 00:12:55.061 } 00:12:55.061 ]' 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:55.061 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:55.321 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:55.321 16:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.888 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:55.889 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.148 16:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:56.716 00:12:56.716 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.716 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.716 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.716 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.976 16:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.976 { 00:12:56.976 "cntlid": 99, 00:12:56.976 "qid": 0, 00:12:56.976 "state": "enabled", 00:12:56.976 "thread": "nvmf_tgt_poll_group_000", 00:12:56.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:56.976 "listen_address": { 00:12:56.976 "trtype": "TCP", 00:12:56.976 "adrfam": "IPv4", 00:12:56.976 "traddr": "10.0.0.3", 00:12:56.976 "trsvcid": "4420" 00:12:56.976 }, 00:12:56.976 "peer_address": { 00:12:56.976 "trtype": "TCP", 00:12:56.976 "adrfam": "IPv4", 00:12:56.976 "traddr": "10.0.0.1", 00:12:56.976 "trsvcid": "35752" 00:12:56.976 }, 00:12:56.976 "auth": { 00:12:56.976 "state": "completed", 00:12:56.976 "digest": "sha512", 00:12:56.976 "dhgroup": "null" 00:12:56.976 } 00:12:56.976 } 00:12:56.976 ]' 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:56.976 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.235 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:57.235 16:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.173 16:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:58.173 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.433 16:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:58.692 00:12:58.692 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.692 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.692 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.951 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.951 { 00:12:58.951 "cntlid": 101, 00:12:58.951 "qid": 0, 00:12:58.951 "state": "enabled", 00:12:58.951 "thread": "nvmf_tgt_poll_group_000", 00:12:58.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:12:58.952 "listen_address": { 00:12:58.952 "trtype": "TCP", 00:12:58.952 "adrfam": "IPv4", 00:12:58.952 "traddr": "10.0.0.3", 00:12:58.952 "trsvcid": "4420" 00:12:58.952 }, 00:12:58.952 "peer_address": { 00:12:58.952 "trtype": "TCP", 00:12:58.952 "adrfam": "IPv4", 00:12:58.952 "traddr": "10.0.0.1", 00:12:58.952 "trsvcid": "35784" 00:12:58.952 }, 00:12:58.952 "auth": { 00:12:58.952 "state": "completed", 00:12:58.952 "digest": "sha512", 00:12:58.952 "dhgroup": "null" 00:12:58.952 } 00:12:58.952 } 00:12:58.952 ]' 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:58.952 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.211 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:59.211 16:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.778 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.037 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.296 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.296 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:00.296 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.296 16:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:00.555 00:13:00.555 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.555 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.555 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.815 { 00:13:00.815 "cntlid": 103, 00:13:00.815 "qid": 0, 00:13:00.815 "state": "enabled", 00:13:00.815 "thread": "nvmf_tgt_poll_group_000", 00:13:00.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:00.815 "listen_address": { 00:13:00.815 "trtype": "TCP", 00:13:00.815 "adrfam": "IPv4", 00:13:00.815 "traddr": "10.0.0.3", 00:13:00.815 "trsvcid": "4420" 00:13:00.815 }, 00:13:00.815 "peer_address": { 00:13:00.815 "trtype": "TCP", 00:13:00.815 "adrfam": "IPv4", 00:13:00.815 "traddr": "10.0.0.1", 00:13:00.815 "trsvcid": "35804" 00:13:00.815 }, 00:13:00.815 "auth": { 00:13:00.815 "state": "completed", 00:13:00.815 "digest": "sha512", 00:13:00.815 "dhgroup": "null" 00:13:00.815 } 00:13:00.815 } 00:13:00.815 ]' 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:00.815 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:01.074 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:01.074 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:01.074 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.334 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:01.334 16:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:01.902 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.161 16:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:02.420 00:13:02.420 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.420 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.420 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.988 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.988 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.988 
16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.989 { 00:13:02.989 "cntlid": 105, 00:13:02.989 "qid": 0, 00:13:02.989 "state": "enabled", 00:13:02.989 "thread": "nvmf_tgt_poll_group_000", 00:13:02.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:02.989 "listen_address": { 00:13:02.989 "trtype": "TCP", 00:13:02.989 "adrfam": "IPv4", 00:13:02.989 "traddr": "10.0.0.3", 00:13:02.989 "trsvcid": "4420" 00:13:02.989 }, 00:13:02.989 "peer_address": { 00:13:02.989 "trtype": "TCP", 00:13:02.989 "adrfam": "IPv4", 00:13:02.989 "traddr": "10.0.0.1", 00:13:02.989 "trsvcid": "35832" 00:13:02.989 }, 00:13:02.989 "auth": { 00:13:02.989 "state": "completed", 00:13:02.989 "digest": "sha512", 00:13:02.989 "dhgroup": "ffdhe2048" 00:13:02.989 } 00:13:02.989 } 00:13:02.989 ]' 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.989 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.248 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:03.248 16:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:03.817 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.817 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:03.817 16:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.817 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:03.817 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.817 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:04.076 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.076 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:04.346 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.347 16:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:04.619 00:13:04.619 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.619 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.619 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.879 { 00:13:04.879 "cntlid": 107, 00:13:04.879 "qid": 0, 00:13:04.879 "state": "enabled", 00:13:04.879 "thread": "nvmf_tgt_poll_group_000", 00:13:04.879 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:04.879 "listen_address": { 00:13:04.879 "trtype": "TCP", 00:13:04.879 "adrfam": "IPv4", 00:13:04.879 "traddr": "10.0.0.3", 00:13:04.879 "trsvcid": "4420" 00:13:04.879 }, 00:13:04.879 "peer_address": { 00:13:04.879 "trtype": "TCP", 00:13:04.879 "adrfam": "IPv4", 00:13:04.879 "traddr": "10.0.0.1", 00:13:04.879 "trsvcid": "51906" 00:13:04.879 }, 00:13:04.879 "auth": { 00:13:04.879 "state": "completed", 00:13:04.879 "digest": "sha512", 00:13:04.879 "dhgroup": "ffdhe2048" 00:13:04.879 } 00:13:04.879 } 00:13:04.879 ]' 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:04.879 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:05.138 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:05.138 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:05.138 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.397 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:05.397 16:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.964 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.223 16:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:06.482 00:13:06.482 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.482 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.482 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.741 { 00:13:06.741 "cntlid": 109, 00:13:06.741 "qid": 0, 00:13:06.741 "state": "enabled", 00:13:06.741 "thread": "nvmf_tgt_poll_group_000", 00:13:06.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:06.741 "listen_address": { 00:13:06.741 "trtype": "TCP", 00:13:06.741 "adrfam": "IPv4", 00:13:06.741 "traddr": "10.0.0.3", 00:13:06.741 "trsvcid": "4420" 00:13:06.741 }, 00:13:06.741 "peer_address": { 00:13:06.741 "trtype": "TCP", 00:13:06.741 "adrfam": "IPv4", 00:13:06.741 "traddr": "10.0.0.1", 00:13:06.741 "trsvcid": "51936" 00:13:06.741 }, 00:13:06.741 "auth": { 00:13:06.741 "state": "completed", 00:13:06.741 "digest": "sha512", 00:13:06.741 "dhgroup": "ffdhe2048" 00:13:06.741 } 00:13:06.741 } 00:13:06.741 ]' 00:13:06.741 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:07.000 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:07.259 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:07.259 16:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:07.827 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.827 16:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:07.827 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.827 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.086 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.086 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:08.086 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.086 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.345 16:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:08.605 00:13:08.605 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.605 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.605 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.864 { 00:13:08.864 "cntlid": 111, 00:13:08.864 "qid": 0, 00:13:08.864 "state": "enabled", 00:13:08.864 "thread": "nvmf_tgt_poll_group_000", 00:13:08.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:08.864 "listen_address": { 00:13:08.864 "trtype": "TCP", 00:13:08.864 "adrfam": "IPv4", 00:13:08.864 "traddr": "10.0.0.3", 00:13:08.864 "trsvcid": "4420" 00:13:08.864 }, 00:13:08.864 "peer_address": { 00:13:08.864 "trtype": "TCP", 00:13:08.864 "adrfam": "IPv4", 00:13:08.864 "traddr": "10.0.0.1", 00:13:08.864 "trsvcid": "51964" 00:13:08.864 }, 00:13:08.864 "auth": { 00:13:08.864 "state": "completed", 00:13:08.864 "digest": "sha512", 00:13:08.864 "dhgroup": "ffdhe2048" 00:13:08.864 } 00:13:08.864 } 00:13:08.864 ]' 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.864 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:09.123 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:09.123 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:09.123 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:09.383 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:09.383 16:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:09.950 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.209 16:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:10.468 00:13:10.468 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.468 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:10.468 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.727 { 00:13:10.727 "cntlid": 113, 00:13:10.727 "qid": 0, 00:13:10.727 "state": "enabled", 00:13:10.727 "thread": "nvmf_tgt_poll_group_000", 00:13:10.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:10.727 "listen_address": { 00:13:10.727 "trtype": "TCP", 00:13:10.727 "adrfam": "IPv4", 00:13:10.727 "traddr": "10.0.0.3", 00:13:10.727 "trsvcid": "4420" 00:13:10.727 }, 00:13:10.727 "peer_address": { 00:13:10.727 "trtype": "TCP", 00:13:10.727 "adrfam": "IPv4", 00:13:10.727 "traddr": "10.0.0.1", 00:13:10.727 "trsvcid": "52000" 00:13:10.727 }, 00:13:10.727 "auth": { 00:13:10.727 "state": "completed", 00:13:10.727 "digest": "sha512", 00:13:10.727 "dhgroup": "ffdhe3072" 00:13:10.727 } 00:13:10.727 } 00:13:10.727 ]' 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.727 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.986 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:10.986 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.986 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.986 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.986 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.245 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:11.245 16:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret 
DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:11.813 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.073 16:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:12.332 00:13:12.332 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.332 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.332 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.592 { 00:13:12.592 "cntlid": 115, 00:13:12.592 "qid": 0, 00:13:12.592 "state": "enabled", 00:13:12.592 "thread": "nvmf_tgt_poll_group_000", 00:13:12.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:12.592 "listen_address": { 00:13:12.592 "trtype": "TCP", 00:13:12.592 "adrfam": "IPv4", 00:13:12.592 "traddr": "10.0.0.3", 00:13:12.592 "trsvcid": "4420" 00:13:12.592 }, 00:13:12.592 "peer_address": { 00:13:12.592 "trtype": "TCP", 00:13:12.592 "adrfam": "IPv4", 00:13:12.592 "traddr": "10.0.0.1", 00:13:12.592 "trsvcid": "52024" 00:13:12.592 }, 00:13:12.592 "auth": { 00:13:12.592 "state": "completed", 00:13:12.592 "digest": "sha512", 00:13:12.592 "dhgroup": "ffdhe3072" 00:13:12.592 } 00:13:12.592 } 00:13:12.592 ]' 00:13:12.592 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.851 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.111 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:13.111 16:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 
92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:13.680 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:13.940 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:14.199 00:13:14.199 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.199 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.199 16:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.459 { 00:13:14.459 "cntlid": 117, 00:13:14.459 "qid": 0, 00:13:14.459 "state": "enabled", 00:13:14.459 "thread": "nvmf_tgt_poll_group_000", 00:13:14.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:14.459 "listen_address": { 00:13:14.459 "trtype": "TCP", 00:13:14.459 "adrfam": "IPv4", 00:13:14.459 "traddr": "10.0.0.3", 00:13:14.459 "trsvcid": "4420" 00:13:14.459 }, 00:13:14.459 "peer_address": { 00:13:14.459 "trtype": "TCP", 00:13:14.459 "adrfam": "IPv4", 00:13:14.459 "traddr": "10.0.0.1", 00:13:14.459 "trsvcid": "57336" 00:13:14.459 }, 00:13:14.459 "auth": { 00:13:14.459 "state": "completed", 00:13:14.459 "digest": "sha512", 00:13:14.459 "dhgroup": "ffdhe3072" 00:13:14.459 } 00:13:14.459 } 00:13:14.459 ]' 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.459 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.718 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:14.718 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.718 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.718 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.718 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:14.977 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:14.977 16:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:15.544 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:15.803 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:16.372 00:13:16.372 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.372 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.372 16:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.630 { 00:13:16.630 "cntlid": 119, 00:13:16.630 "qid": 0, 00:13:16.630 "state": "enabled", 00:13:16.630 "thread": "nvmf_tgt_poll_group_000", 00:13:16.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:16.630 "listen_address": { 00:13:16.630 "trtype": "TCP", 00:13:16.630 "adrfam": "IPv4", 00:13:16.630 "traddr": "10.0.0.3", 00:13:16.630 "trsvcid": "4420" 00:13:16.630 }, 00:13:16.630 "peer_address": { 00:13:16.630 "trtype": "TCP", 00:13:16.630 "adrfam": "IPv4", 00:13:16.630 "traddr": "10.0.0.1", 00:13:16.630 "trsvcid": "57362" 00:13:16.630 }, 00:13:16.630 "auth": { 00:13:16.630 "state": "completed", 00:13:16.630 "digest": "sha512", 00:13:16.630 "dhgroup": "ffdhe3072" 00:13:16.630 } 00:13:16.630 } 00:13:16.630 ]' 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.630 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:16.889 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:16.889 16:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:17.457 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:17.716 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:17.975 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:18.246 00:13:18.246 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.246 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.246 16:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.820 { 00:13:18.820 "cntlid": 121, 00:13:18.820 "qid": 0, 00:13:18.820 "state": "enabled", 00:13:18.820 "thread": "nvmf_tgt_poll_group_000", 00:13:18.820 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:18.820 "listen_address": { 00:13:18.820 "trtype": "TCP", 00:13:18.820 "adrfam": "IPv4", 00:13:18.820 "traddr": "10.0.0.3", 00:13:18.820 "trsvcid": "4420" 00:13:18.820 }, 00:13:18.820 "peer_address": { 00:13:18.820 "trtype": "TCP", 00:13:18.820 "adrfam": "IPv4", 00:13:18.820 "traddr": "10.0.0.1", 00:13:18.820 "trsvcid": "57394" 00:13:18.820 }, 00:13:18.820 "auth": { 00:13:18.820 "state": "completed", 00:13:18.820 "digest": "sha512", 00:13:18.820 "dhgroup": "ffdhe4096" 00:13:18.820 } 00:13:18.820 } 00:13:18.820 ]' 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.820 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.079 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret 
DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:19.080 16:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:20.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.018 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.277 16:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:20.535 00:13:20.535 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.535 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.535 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.794 { 00:13:20.794 "cntlid": 123, 00:13:20.794 "qid": 0, 00:13:20.794 "state": "enabled", 00:13:20.794 "thread": "nvmf_tgt_poll_group_000", 00:13:20.794 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:20.794 "listen_address": { 00:13:20.794 "trtype": "TCP", 00:13:20.794 "adrfam": "IPv4", 00:13:20.794 "traddr": "10.0.0.3", 00:13:20.794 "trsvcid": "4420" 00:13:20.794 }, 00:13:20.794 "peer_address": { 00:13:20.794 "trtype": "TCP", 00:13:20.794 "adrfam": "IPv4", 00:13:20.794 "traddr": "10.0.0.1", 00:13:20.794 "trsvcid": "57428" 00:13:20.794 }, 00:13:20.794 "auth": { 00:13:20.794 "state": "completed", 00:13:20.794 "digest": "sha512", 00:13:20.794 "dhgroup": "ffdhe4096" 00:13:20.794 } 00:13:20.794 } 00:13:20.794 ]' 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.794 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:21.053 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:21.053 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:21.053 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:21.053 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:21.053 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.313 16:08:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:21.313 16:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:21.880 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.139 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.140 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.140 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.140 16:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.140 16:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:22.706 00:13:22.706 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.706 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.706 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.965 { 00:13:22.965 "cntlid": 125, 00:13:22.965 "qid": 0, 00:13:22.965 "state": "enabled", 00:13:22.965 "thread": "nvmf_tgt_poll_group_000", 00:13:22.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:22.965 "listen_address": { 00:13:22.965 "trtype": "TCP", 00:13:22.965 "adrfam": "IPv4", 00:13:22.965 "traddr": "10.0.0.3", 00:13:22.965 "trsvcid": "4420" 00:13:22.965 }, 00:13:22.965 "peer_address": { 00:13:22.965 "trtype": "TCP", 00:13:22.965 "adrfam": "IPv4", 00:13:22.965 "traddr": "10.0.0.1", 00:13:22.965 "trsvcid": "57454" 00:13:22.965 }, 00:13:22.965 "auth": { 00:13:22.965 "state": "completed", 00:13:22.965 "digest": "sha512", 00:13:22.965 "dhgroup": "ffdhe4096" 00:13:22.965 } 00:13:22.965 } 00:13:22.965 ]' 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:22.965 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:23.224 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:23.224 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:23.224 16:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.482 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:23.482 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:24.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.051 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.311 16:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:24.880 00:13:24.880 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.880 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.880 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:25.139 { 00:13:25.139 "cntlid": 127, 00:13:25.139 "qid": 0, 00:13:25.139 "state": "enabled", 00:13:25.139 "thread": "nvmf_tgt_poll_group_000", 00:13:25.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:25.139 "listen_address": { 00:13:25.139 "trtype": "TCP", 00:13:25.139 "adrfam": "IPv4", 00:13:25.139 "traddr": "10.0.0.3", 00:13:25.139 "trsvcid": "4420" 00:13:25.139 }, 00:13:25.139 "peer_address": { 00:13:25.139 "trtype": "TCP", 00:13:25.139 "adrfam": "IPv4", 00:13:25.139 "traddr": "10.0.0.1", 00:13:25.139 "trsvcid": "49124" 00:13:25.139 }, 00:13:25.139 "auth": { 00:13:25.139 "state": "completed", 00:13:25.139 "digest": "sha512", 00:13:25.139 "dhgroup": "ffdhe4096" 00:13:25.139 } 00:13:25.139 } 00:13:25.139 ]' 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:25.139 16:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.707 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:25.707 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:26.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.275 16:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.535 16:08:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:26.535 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:27.103 00:13:27.103 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:27.103 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:27.103 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:27.361 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:27.361 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:27.362 { 00:13:27.362 "cntlid": 129, 00:13:27.362 "qid": 0, 00:13:27.362 "state": "enabled", 00:13:27.362 "thread": "nvmf_tgt_poll_group_000", 00:13:27.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:27.362 "listen_address": { 00:13:27.362 "trtype": "TCP", 00:13:27.362 "adrfam": "IPv4", 00:13:27.362 "traddr": "10.0.0.3", 00:13:27.362 "trsvcid": "4420" 00:13:27.362 }, 00:13:27.362 "peer_address": { 00:13:27.362 "trtype": "TCP", 00:13:27.362 "adrfam": "IPv4", 00:13:27.362 "traddr": "10.0.0.1", 00:13:27.362 "trsvcid": "49142" 00:13:27.362 }, 00:13:27.362 "auth": { 00:13:27.362 "state": "completed", 00:13:27.362 "digest": "sha512", 00:13:27.362 "dhgroup": "ffdhe6144" 00:13:27.362 } 00:13:27.362 } 00:13:27.362 ]' 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:27.362 16:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:27.362 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:27.362 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:27.362 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.620 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:27.621 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:28.558 16:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:28.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.558 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.816 16:08:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:28.816 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:29.075 00:13:29.075 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:29.075 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:29.075 16:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:29.642 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:29.642 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:29.643 { 00:13:29.643 "cntlid": 131, 00:13:29.643 "qid": 0, 00:13:29.643 "state": "enabled", 00:13:29.643 "thread": "nvmf_tgt_poll_group_000", 00:13:29.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:29.643 "listen_address": { 00:13:29.643 "trtype": "TCP", 00:13:29.643 "adrfam": "IPv4", 00:13:29.643 "traddr": "10.0.0.3", 00:13:29.643 "trsvcid": "4420" 00:13:29.643 }, 00:13:29.643 "peer_address": { 00:13:29.643 "trtype": "TCP", 00:13:29.643 "adrfam": "IPv4", 00:13:29.643 "traddr": "10.0.0.1", 00:13:29.643 "trsvcid": "49174" 00:13:29.643 }, 00:13:29.643 "auth": { 00:13:29.643 "state": "completed", 00:13:29.643 "digest": "sha512", 00:13:29.643 "dhgroup": "ffdhe6144" 00:13:29.643 } 00:13:29.643 } 00:13:29.643 ]' 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:29.643 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.902 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:29.902 16:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:30.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.469 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.728 16:08:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.728 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:30.729 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:31.296 00:13:31.296 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:31.296 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:31.296 16:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:31.555 { 00:13:31.555 "cntlid": 133, 00:13:31.555 "qid": 0, 00:13:31.555 "state": "enabled", 00:13:31.555 "thread": "nvmf_tgt_poll_group_000", 00:13:31.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:31.555 "listen_address": { 00:13:31.555 "trtype": "TCP", 00:13:31.555 "adrfam": "IPv4", 00:13:31.555 "traddr": "10.0.0.3", 00:13:31.555 "trsvcid": "4420" 00:13:31.555 }, 00:13:31.555 "peer_address": { 00:13:31.555 "trtype": "TCP", 00:13:31.555 "adrfam": "IPv4", 00:13:31.555 "traddr": "10.0.0.1", 00:13:31.555 "trsvcid": "49192" 00:13:31.555 }, 00:13:31.555 "auth": { 00:13:31.555 "state": "completed", 00:13:31.555 "digest": "sha512", 00:13:31.555 "dhgroup": "ffdhe6144" 00:13:31.555 } 00:13:31.555 } 00:13:31.555 ]' 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.555 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.814 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:31.814 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.814 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.814 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.814 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.073 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:32.073 16:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:32.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.638 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:32.639 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.639 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:32.897 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:32.898 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:33.156 00:13:33.415 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:33.415 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:33.415 16:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:33.682 { 00:13:33.682 "cntlid": 135, 00:13:33.682 "qid": 0, 00:13:33.682 "state": "enabled", 00:13:33.682 "thread": "nvmf_tgt_poll_group_000", 00:13:33.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:33.682 "listen_address": { 00:13:33.682 "trtype": "TCP", 00:13:33.682 "adrfam": "IPv4", 00:13:33.682 "traddr": "10.0.0.3", 00:13:33.682 "trsvcid": "4420" 00:13:33.682 }, 00:13:33.682 "peer_address": { 00:13:33.682 "trtype": "TCP", 00:13:33.682 "adrfam": "IPv4", 00:13:33.682 "traddr": "10.0.0.1", 00:13:33.682 "trsvcid": "49222" 00:13:33.682 }, 00:13:33.682 "auth": { 00:13:33.682 "state": "completed", 00:13:33.682 "digest": "sha512", 00:13:33.682 "dhgroup": "ffdhe6144" 00:13:33.682 } 00:13:33.682 } 00:13:33.682 ]' 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.682 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.951 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:33.951 16:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:34.519 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:34.778 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.038 16:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:35.607 00:13:35.607 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:35.607 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.607 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.866 { 00:13:35.866 "cntlid": 137, 00:13:35.866 "qid": 0, 00:13:35.866 "state": "enabled", 00:13:35.866 "thread": "nvmf_tgt_poll_group_000", 00:13:35.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:35.866 "listen_address": { 00:13:35.866 "trtype": "TCP", 00:13:35.866 "adrfam": "IPv4", 00:13:35.866 "traddr": "10.0.0.3", 00:13:35.866 "trsvcid": "4420" 00:13:35.866 }, 00:13:35.866 "peer_address": { 00:13:35.866 "trtype": "TCP", 00:13:35.866 "adrfam": "IPv4", 00:13:35.866 "traddr": "10.0.0.1", 00:13:35.866 "trsvcid": "51352" 00:13:35.866 }, 00:13:35.866 "auth": { 00:13:35.866 "state": "completed", 00:13:35.866 "digest": "sha512", 00:13:35.866 "dhgroup": "ffdhe8192" 00:13:35.866 } 00:13:35.866 } 00:13:35.866 ]' 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.866 16:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.866 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:36.125 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.125 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.125 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.125 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:36.384 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:36.384 16:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:36.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:36.953 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:37.212 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:37.213 16:08:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.213 16:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:37.780 00:13:37.780 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:37.780 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:37.780 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.040 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:38.040 { 00:13:38.040 "cntlid": 139, 00:13:38.040 "qid": 0, 00:13:38.040 "state": "enabled", 00:13:38.040 "thread": "nvmf_tgt_poll_group_000", 00:13:38.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:38.040 "listen_address": { 00:13:38.040 "trtype": "TCP", 00:13:38.040 "adrfam": "IPv4", 00:13:38.040 "traddr": "10.0.0.3", 00:13:38.040 "trsvcid": "4420" 00:13:38.040 }, 00:13:38.040 "peer_address": { 00:13:38.040 "trtype": "TCP", 00:13:38.040 "adrfam": "IPv4", 00:13:38.040 "traddr": "10.0.0.1", 00:13:38.040 "trsvcid": "51366" 00:13:38.040 }, 00:13:38.040 "auth": { 00:13:38.040 "state": "completed", 00:13:38.040 "digest": "sha512", 00:13:38.040 "dhgroup": "ffdhe8192" 00:13:38.040 } 00:13:38.040 } 00:13:38.040 ]' 00:13:38.040 16:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:38.299 16:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:38.558 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:38.558 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: --dhchap-ctrl-secret DHHC-1:02:MTYyMDc1NzNkOGZkMTkyOTM1ZDIwNjAxM2E5ZWJjMzg4YzI0NGFkZWE0N2VjMmVin8TC/A==: 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:39.495 16:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:39.495 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:40.432 00:13:40.432 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.432 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.432 16:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.432 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.432 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.432 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.432 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.432 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.692 { 00:13:40.692 "cntlid": 141, 00:13:40.692 "qid": 0, 00:13:40.692 "state": "enabled", 00:13:40.692 "thread": "nvmf_tgt_poll_group_000", 00:13:40.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:40.692 "listen_address": { 00:13:40.692 "trtype": "TCP", 00:13:40.692 "adrfam": "IPv4", 00:13:40.692 "traddr": "10.0.0.3", 00:13:40.692 "trsvcid": "4420" 00:13:40.692 }, 00:13:40.692 "peer_address": { 00:13:40.692 "trtype": "TCP", 00:13:40.692 "adrfam": "IPv4", 00:13:40.692 "traddr": "10.0.0.1", 00:13:40.692 "trsvcid": "51400" 00:13:40.692 }, 00:13:40.692 "auth": { 00:13:40.692 "state": "completed", 00:13:40.692 "digest": 
"sha512", 00:13:40.692 "dhgroup": "ffdhe8192" 00:13:40.692 } 00:13:40.692 } 00:13:40.692 ]' 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:40.692 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.951 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:40.951 16:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:01:YTU4ZDBlNDljYTRmMjMyY2E1NjI5NTYyYTk5MDg0MTf4m95R: 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:41.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.519 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:41.778 16:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:42.346 00:13:42.604 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.604 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.604 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:42.862 { 00:13:42.862 "cntlid": 143, 00:13:42.862 "qid": 0, 00:13:42.862 "state": "enabled", 00:13:42.862 "thread": "nvmf_tgt_poll_group_000", 00:13:42.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:42.862 "listen_address": { 00:13:42.862 "trtype": "TCP", 00:13:42.862 "adrfam": "IPv4", 00:13:42.862 "traddr": "10.0.0.3", 00:13:42.862 "trsvcid": "4420" 00:13:42.862 }, 00:13:42.862 "peer_address": { 00:13:42.862 "trtype": "TCP", 00:13:42.862 "adrfam": "IPv4", 00:13:42.862 "traddr": "10.0.0.1", 00:13:42.862 "trsvcid": "51422" 00:13:42.862 }, 00:13:42.862 "auth": { 00:13:42.862 "state": "completed", 00:13:42.862 
"digest": "sha512", 00:13:42.862 "dhgroup": "ffdhe8192" 00:13:42.862 } 00:13:42.862 } 00:13:42.862 ]' 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:42.862 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.121 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:43.121 16:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:44.058 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.317 16:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:44.885 00:13:44.885 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.885 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.885 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.144 { 00:13:45.144 "cntlid": 145, 00:13:45.144 "qid": 0, 00:13:45.144 "state": "enabled", 00:13:45.144 "thread": "nvmf_tgt_poll_group_000", 00:13:45.144 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:45.144 "listen_address": { 00:13:45.144 "trtype": "TCP", 00:13:45.144 "adrfam": "IPv4", 00:13:45.144 "traddr": "10.0.0.3", 00:13:45.144 "trsvcid": "4420" 00:13:45.144 }, 00:13:45.144 "peer_address": { 00:13:45.144 "trtype": "TCP", 00:13:45.144 "adrfam": "IPv4", 00:13:45.144 "traddr": "10.0.0.1", 00:13:45.144 "trsvcid": "56006" 00:13:45.144 }, 00:13:45.144 "auth": { 00:13:45.144 "state": "completed", 00:13:45.144 "digest": "sha512", 00:13:45.144 "dhgroup": "ffdhe8192" 00:13:45.144 } 00:13:45.144 } 00:13:45.144 ]' 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:45.144 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.402 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:45.402 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.402 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.402 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.402 16:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.660 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:45.661 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:00:ODdkNzI1YzhmNmJmN2Q2Y2M0MDVkYWQyYWNlZmUzMTA3OGVhMjY3MGQzYjYzOGVmVOJ5Eg==: --dhchap-ctrl-secret DHHC-1:03:MGE3MmQyNjQyNDE1ZTY4ZGQ3MzI0OTE4MWU4M2MzNDhjMDU3YzQ3MWY3ZTk3MmE5OTg3MjQwN2FhYWRkZWQ0NtWJCNQ=: 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 00:13:46.230 16:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:46.230 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:46.231 16:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:46.850 request: 00:13:46.850 { 00:13:46.850 "name": "nvme0", 00:13:46.850 "trtype": "tcp", 00:13:46.850 "traddr": "10.0.0.3", 00:13:46.850 "adrfam": "ipv4", 00:13:46.850 "trsvcid": "4420", 00:13:46.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:46.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:46.850 "prchk_reftag": false, 00:13:46.850 "prchk_guard": false, 00:13:46.850 "hdgst": false, 00:13:46.850 "ddgst": false, 00:13:46.850 "dhchap_key": "key2", 00:13:46.850 "allow_unrecognized_csi": false, 00:13:46.850 "method": "bdev_nvme_attach_controller", 00:13:46.850 "req_id": 1 00:13:46.850 } 00:13:46.850 Got JSON-RPC error response 00:13:46.850 response: 00:13:46.850 { 00:13:46.850 "code": -5, 00:13:46.850 "message": "Input/output error" 00:13:46.850 } 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:46.850 
16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:46.850 16:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:47.418 request: 00:13:47.418 { 00:13:47.418 "name": "nvme0", 00:13:47.418 "trtype": "tcp", 00:13:47.418 "traddr": "10.0.0.3", 00:13:47.418 "adrfam": "ipv4", 00:13:47.418 "trsvcid": "4420", 00:13:47.418 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:47.418 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:47.418 "prchk_reftag": false, 00:13:47.418 "prchk_guard": false, 00:13:47.418 "hdgst": false, 00:13:47.419 "ddgst": false, 00:13:47.419 "dhchap_key": "key1", 00:13:47.419 "dhchap_ctrlr_key": "ckey2", 00:13:47.419 "allow_unrecognized_csi": false, 00:13:47.419 "method": "bdev_nvme_attach_controller", 00:13:47.419 "req_id": 1 00:13:47.419 } 00:13:47.419 Got JSON-RPC error response 00:13:47.419 response: 00:13:47.419 { 
00:13:47.419 "code": -5, 00:13:47.419 "message": "Input/output error" 00:13:47.419 } 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:47.677 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:48.245 
request: 00:13:48.245 { 00:13:48.245 "name": "nvme0", 00:13:48.245 "trtype": "tcp", 00:13:48.245 "traddr": "10.0.0.3", 00:13:48.245 "adrfam": "ipv4", 00:13:48.245 "trsvcid": "4420", 00:13:48.245 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:48.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:48.245 "prchk_reftag": false, 00:13:48.245 "prchk_guard": false, 00:13:48.245 "hdgst": false, 00:13:48.245 "ddgst": false, 00:13:48.245 "dhchap_key": "key1", 00:13:48.245 "dhchap_ctrlr_key": "ckey1", 00:13:48.245 "allow_unrecognized_csi": false, 00:13:48.245 "method": "bdev_nvme_attach_controller", 00:13:48.245 "req_id": 1 00:13:48.245 } 00:13:48.245 Got JSON-RPC error response 00:13:48.246 response: 00:13:48.246 { 00:13:48.246 "code": -5, 00:13:48.246 "message": "Input/output error" 00:13:48.246 } 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80115 ']' 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80115' 00:13:48.246 killing process with pid 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80115 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:48.246 16:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=83157 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 83157 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83157 ']' 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.246 16:08:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 83157 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 83157 ']' 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:48.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
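[Annotation] At this point the test has killed the first target process (pid 80115) and brought up a fresh nvmf_tgt under --wait-for-rpc with the nvmf_auth debug log flag, so the DHCHAP material can be re-registered through the keyring before any subsystem host is configured; the keyring_file_add_key calls that follow are exactly that step. A minimal sketch of the sequence, using only commands visible in this trace (the netns wrapper, absolute paths and the /tmp/spdk.key-* file names are specific to this run, not fixed values, and the rest of the target setup such as listeners and namespaces happens elsewhere in the script):

  # restart the target with auth-level logging, holding initialization until RPC config is in
  nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &

  # register the DHCHAP secrets as file-backed keyring entries on the target
  rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.kPz
  rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oPh
  rpc.py keyring_file_add_key key3  /tmp/spdk.key-sha512.ptz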
00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:48.506 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.074 null0 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.074 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kPz 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.oPh ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.oPh 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.DRp 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1So ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1So 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.075 16:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.E6E 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.dW8 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dW8 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.ptz 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
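[Annotation] With the keys in the keyring, target/auth.sh@179 repeats the connect_authenticate flow for key3: the target is told which key this host NQN may authenticate with, the host-side controller is attached presenting the same key, and the resulting qpair is checked for a completed auth state. Reduced to the RPCs seen in the surrounding trace (rpc.py paths abbreviated, the full host NQN replaced by <hostnqn>; the host RPC server listens on /var/tmp/host.sock in this run):

  # target side: permit the host NQN to authenticate against key3
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key3

  # host side: attach a controller, presenting key3 during DH-HMAC-CHAP
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3

  # the check connect_authenticate performs amounts to: auth.state must be "completed"
  rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'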
00:13:49.075 16:08:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:50.012 nvme0n1 00:13:50.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:50.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:50.012 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:50.271 { 00:13:50.271 "cntlid": 1, 00:13:50.271 "qid": 0, 00:13:50.271 "state": "enabled", 00:13:50.271 "thread": "nvmf_tgt_poll_group_000", 00:13:50.271 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:50.271 "listen_address": { 00:13:50.271 "trtype": "TCP", 00:13:50.271 "adrfam": "IPv4", 00:13:50.271 "traddr": "10.0.0.3", 00:13:50.271 "trsvcid": "4420" 00:13:50.271 }, 00:13:50.271 "peer_address": { 00:13:50.271 "trtype": "TCP", 00:13:50.271 "adrfam": "IPv4", 00:13:50.271 "traddr": "10.0.0.1", 00:13:50.271 "trsvcid": "56048" 00:13:50.271 }, 00:13:50.271 "auth": { 00:13:50.271 "state": "completed", 00:13:50.271 "digest": "sha512", 00:13:50.271 "dhgroup": "ffdhe8192" 00:13:50.271 } 00:13:50.271 } 00:13:50.271 ]' 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:50.271 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:50.531 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:50.531 16:08:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:50.531 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:50.531 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:50.531 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:50.790 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:50.790 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:51.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key3 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:51.358 16:08:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:51.617 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.185 request: 00:13:52.185 { 00:13:52.185 "name": "nvme0", 00:13:52.185 "trtype": "tcp", 00:13:52.185 "traddr": "10.0.0.3", 00:13:52.185 "adrfam": "ipv4", 00:13:52.185 "trsvcid": "4420", 00:13:52.185 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:52.185 "prchk_reftag": false, 00:13:52.185 "prchk_guard": false, 00:13:52.185 "hdgst": false, 00:13:52.185 "ddgst": false, 00:13:52.185 "dhchap_key": "key3", 00:13:52.185 "allow_unrecognized_csi": false, 00:13:52.185 "method": "bdev_nvme_attach_controller", 00:13:52.185 "req_id": 1 00:13:52.185 } 00:13:52.185 Got JSON-RPC error response 00:13:52.185 response: 00:13:52.185 { 00:13:52.185 "code": -5, 00:13:52.185 "message": "Input/output error" 00:13:52.185 } 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.185 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.186 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.186 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.186 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.186 16:08:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:52.444 request: 00:13:52.444 { 00:13:52.445 "name": "nvme0", 00:13:52.445 "trtype": "tcp", 00:13:52.445 "traddr": "10.0.0.3", 00:13:52.445 "adrfam": "ipv4", 00:13:52.445 "trsvcid": "4420", 00:13:52.445 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:52.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:52.445 "prchk_reftag": false, 00:13:52.445 "prchk_guard": false, 00:13:52.445 "hdgst": false, 00:13:52.445 "ddgst": false, 00:13:52.445 "dhchap_key": "key3", 00:13:52.445 "allow_unrecognized_csi": false, 00:13:52.445 "method": "bdev_nvme_attach_controller", 00:13:52.445 "req_id": 1 00:13:52.445 } 00:13:52.445 Got JSON-RPC error response 00:13:52.445 response: 00:13:52.445 { 00:13:52.445 "code": -5, 00:13:52.445 "message": "Input/output error" 00:13:52.445 } 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.445 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:52.703 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:52.704 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:53.272 request: 00:13:53.272 { 00:13:53.272 "name": "nvme0", 00:13:53.272 "trtype": "tcp", 00:13:53.272 "traddr": "10.0.0.3", 00:13:53.272 "adrfam": "ipv4", 00:13:53.272 "trsvcid": "4420", 00:13:53.272 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:53.272 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:53.272 "prchk_reftag": false, 00:13:53.272 "prchk_guard": false, 00:13:53.272 "hdgst": false, 00:13:53.272 "ddgst": false, 00:13:53.272 "dhchap_key": "key0", 00:13:53.272 "dhchap_ctrlr_key": "key1", 00:13:53.272 "allow_unrecognized_csi": false, 00:13:53.272 "method": "bdev_nvme_attach_controller", 00:13:53.272 "req_id": 1 00:13:53.272 } 00:13:53.272 Got JSON-RPC error response 00:13:53.272 response: 00:13:53.272 { 00:13:53.272 "code": -5, 00:13:53.272 "message": "Input/output error" 00:13:53.272 } 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:53.272 16:08:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:53.531 nvme0n1 00:13:53.531 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:53.531 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.531 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:53.789 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.789 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.789 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:54.049 16:09:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:54.987 nvme0n1 00:13:54.987 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:54.987 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.987 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:55.246 16:09:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.815 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.815 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:55.815 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid 92a6f107-e459-4aaa-bfee-246c0e15cbd1 -l 0 --dhchap-secret DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: --dhchap-ctrl-secret DHHC-1:03:ZjUyYzY4ZGU4OWY3YWIxMDAxZjk1MTNmYWZhNmIwM2Q4OWZiYTZmZGJiYjZlNzc1NzVmMjY0MzkxNzU3ZDU1Nx9pc1Q=: 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:56.384 16:09:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:56.643 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:57.211 request: 00:13:57.211 { 00:13:57.211 "name": "nvme0", 00:13:57.211 "trtype": "tcp", 00:13:57.211 "traddr": "10.0.0.3", 00:13:57.211 "adrfam": "ipv4", 00:13:57.211 "trsvcid": "4420", 00:13:57.211 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1", 00:13:57.211 "prchk_reftag": false, 00:13:57.211 "prchk_guard": false, 00:13:57.211 "hdgst": false, 00:13:57.211 "ddgst": false, 00:13:57.211 "dhchap_key": "key1", 00:13:57.211 "allow_unrecognized_csi": false, 00:13:57.211 "method": "bdev_nvme_attach_controller", 00:13:57.211 "req_id": 1 00:13:57.211 } 00:13:57.211 Got JSON-RPC error response 00:13:57.211 response: 00:13:57.211 { 00:13:57.211 "code": -5, 00:13:57.211 "message": "Input/output error" 00:13:57.211 } 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:57.211 16:09:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:58.147 nvme0n1 00:13:58.147 
16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:13:58.147 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.147 16:09:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:13:58.407 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.407 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.407 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:58.666 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:13:59.234 nvme0n1 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.234 16:09:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key key3 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.494 16:09:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: '' 2s 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: ]] 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTEwNTUwNGY1OTU2ZDYwMGY2YWE1ZTY3MmVmOGIzZDcwql67: 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:13:59.494 16:09:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: 2s 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:02.089 16:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: ]] 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTBlZjllNmE2MzQwYTdjZDRkYTU0NGQzN2ZlM2Q2YzcwYjZhOWJhYmE0MTE5OTVhGW45FQ==: 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:02.089 16:09:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.992 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.992 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.993 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:03.993 16:09:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:04.561 nvme0n1 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:04.561 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:05.497 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:05.497 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:05.497 16:09:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:05.497 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:05.756 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:05.756 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.756 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:06.324 16:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.324 16:09:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:06.582 request: 00:14:06.582 { 00:14:06.582 "name": "nvme0", 00:14:06.582 "dhchap_key": "key1", 00:14:06.582 "dhchap_ctrlr_key": "key3", 00:14:06.582 "method": "bdev_nvme_set_keys", 00:14:06.582 "req_id": 1 00:14:06.582 } 00:14:06.582 Got JSON-RPC error response 00:14:06.582 response: 00:14:06.582 { 00:14:06.582 "code": -13, 00:14:06.582 "message": "Permission denied" 00:14:06.582 } 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.840 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:07.099 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:07.099 16:09:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:08.036 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:08.036 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:08.036 16:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.296 16:09:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.234 nvme0n1 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.234 16:09:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:09.802 request: 00:14:09.802 { 00:14:09.802 "name": "nvme0", 00:14:09.802 "dhchap_key": "key2", 00:14:09.802 "dhchap_ctrlr_key": "key0", 00:14:09.802 "method": "bdev_nvme_set_keys", 00:14:09.802 "req_id": 1 00:14:09.802 } 00:14:09.802 Got JSON-RPC error response 00:14:09.802 response: 00:14:09.802 { 00:14:09.802 "code": -13, 00:14:09.802 "message": "Permission denied" 00:14:09.802 } 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:10.061 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.320 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:10.320 16:09:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:11.257 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:11.257 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.257 16:09:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 80140 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80140 ']' 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80140 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80140 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:11.516 killing process with pid 80140 00:14:11.516 16:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80140' 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80140 00:14:11.516 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80140 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:11.776 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:11.776 rmmod nvme_tcp 00:14:12.035 rmmod nvme_fabrics 00:14:12.035 rmmod nvme_keyring 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 83157 ']' 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 83157 ']' 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:12.035 killing process with pid 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83157' 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 83157 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
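At this point the authentication scenarios are done and the run is tearing itself down: the host-side SPDK application (pid 80140 here) and the nvmf target (pid 83157) have been killed, the nvme kernel modules used for the in-kernel connect tests are unloaded, and the iptables rules plus the veth/namespace topology behind the 10.0.0.x test network are removed along with the generated key files. Condensed from the surrounding trace (interface, namespace and key-file names are specific to this run), the cleanup around this point amounts to:

# unload the initiator-side kernel modules used for the nvme connect tests
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# drop the SPDK_NVMF iptables rules added for the test network
iptables-save | grep -v SPDK_NVMF | iptables-restore

# delete the bridge and the veth/namespace interfaces (nvmf_init_if*, nvmf_tgt_if*)
ip link delete nvmf_br type bridge
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if

# remove the DHHC-1 key files generated at the start of the test
rm -f /tmp/spdk.key-null.kPz /tmp/spdk.key-sha256.DRp /tmp/spdk.key-sha384.E6E \
    /tmp/spdk.key-sha512.ptz /tmp/spdk.key-sha512.oPh /tmp/spdk.key-sha384.1So /tmp/spdk.key-sha256.dW8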
00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:12.035 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kPz /tmp/spdk.key-sha256.DRp /tmp/spdk.key-sha384.E6E /tmp/spdk.key-sha512.ptz /tmp/spdk.key-sha512.oPh /tmp/spdk.key-sha384.1So /tmp/spdk.key-sha256.dW8 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:12.295 00:14:12.295 real 3m6.163s 00:14:12.295 user 7m26.919s 00:14:12.295 sys 0m28.388s 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.295 16:09:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.295 ************************************ 00:14:12.295 END TEST nvmf_auth_target 
00:14:12.295 ************************************ 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:12.556 ************************************ 00:14:12.556 START TEST nvmf_bdevio_no_huge 00:14:12.556 ************************************ 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:12.556 * Looking for test storage... 00:14:12.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:12.556 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:12.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.557 --rc genhtml_branch_coverage=1 00:14:12.557 --rc genhtml_function_coverage=1 00:14:12.557 --rc genhtml_legend=1 00:14:12.557 --rc geninfo_all_blocks=1 00:14:12.557 --rc geninfo_unexecuted_blocks=1 00:14:12.557 00:14:12.557 ' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:12.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.557 --rc genhtml_branch_coverage=1 00:14:12.557 --rc genhtml_function_coverage=1 00:14:12.557 --rc genhtml_legend=1 00:14:12.557 --rc geninfo_all_blocks=1 00:14:12.557 --rc geninfo_unexecuted_blocks=1 00:14:12.557 00:14:12.557 ' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:12.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.557 --rc genhtml_branch_coverage=1 00:14:12.557 --rc genhtml_function_coverage=1 00:14:12.557 --rc genhtml_legend=1 00:14:12.557 --rc geninfo_all_blocks=1 00:14:12.557 --rc geninfo_unexecuted_blocks=1 00:14:12.557 00:14:12.557 ' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:12.557 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:12.557 --rc genhtml_branch_coverage=1 00:14:12.557 --rc genhtml_function_coverage=1 00:14:12.557 --rc genhtml_legend=1 00:14:12.557 --rc geninfo_all_blocks=1 00:14:12.557 --rc geninfo_unexecuted_blocks=1 00:14:12.557 00:14:12.557 ' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:12.557 
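The trace above steps through the lcov version check: "lt 1.15 2" splits both version strings on ".", "-", and ":", compares the fields numerically, and succeeds because lcov 1.15 is older than 2.x, which is why the legacy "--rc lcov_branch_coverage=1"-style options are exported a few lines below. A rough standalone approximation of that comparison (illustrative only, not the exact scripts/common.sh implementation):

#!/usr/bin/env bash
# Rough approximation of the version comparison traced above; the real
# logic lives in scripts/common.sh (cmp_versions / lt), so treat the
# details here as illustrative.

# Succeed (return 0) when version $1 is strictly older than version $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        # Non-numeric fields fall back to 0, like the 'decimal' helper in the trace.
        [[ $a =~ ^[0-9]+$ ]] || a=0
        [[ $b =~ ^[0-9]+$ ]] || b=0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # versions are equal, so not strictly older
}

lcov_ver=$(lcov --version | awk '{print $NF}')   # "1.15" in this run
if version_lt "$lcov_ver" 2; then
    # Old lcov (< 2.x): use the --rc option style seen in this log.
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi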
16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.557 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:12.557 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:12.558 
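The commands traced next (nvmf_veth_init) build the virtual test network the rest of this run uses: two initiator-side veth pairs on the host, two target-side veth pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, all host-side peers enslaved to one bridge, plus iptables rules admitting NVMe/TCP on port 4420. A condensed sketch of those same steps, using the interface names and addresses from the variables above (ordering and error handling simplified relative to nvmf/common.sh; the real iptables rules also carry an SPDK_NVMF comment tag):

ip netns add nvmf_tgt_ns_spdk

# Initiator-side and target-side veth pairs.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target ends live inside the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the netns.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side peers together.
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP (port 4420) in on the initiator interfaces and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT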
16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:12.558 Cannot find device "nvmf_init_br" 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:12.558 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:12.817 Cannot find device "nvmf_init_br2" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:12.817 Cannot find device "nvmf_tgt_br" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:12.817 Cannot find device "nvmf_tgt_br2" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:12.817 Cannot find device "nvmf_init_br" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:12.817 Cannot find device "nvmf_init_br2" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:12.817 Cannot find device "nvmf_tgt_br" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:12.817 Cannot find device "nvmf_tgt_br2" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:12.817 Cannot find device "nvmf_br" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:12.817 Cannot find device "nvmf_init_if" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:12.817 Cannot find device "nvmf_init_if2" 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:12.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:12.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:12.817 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:12.818 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:12.818 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:13.076 16:09:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:13.076 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:13.077 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:13.077 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:13.077 00:14:13.077 --- 10.0.0.3 ping statistics --- 00:14:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.077 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:13.077 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:13.077 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:14:13.077 00:14:13.077 --- 10.0.0.4 ping statistics --- 00:14:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.077 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:13.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:13.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:13.077 00:14:13.077 --- 10.0.0.1 ping statistics --- 00:14:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.077 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:13.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:13.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:14:13.077 00:14:13.077 --- 10.0.0.2 ping statistics --- 00:14:13.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.077 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=83785 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 83785 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 83785 ']' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:13.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:13.077 16:09:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.077 [2024-11-19 16:09:19.722353] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:14:13.077 [2024-11-19 16:09:19.722469] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:13.336 [2024-11-19 16:09:19.884462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.336 [2024-11-19 16:09:19.942965] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:13.336 [2024-11-19 16:09:19.943041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.336 [2024-11-19 16:09:19.943056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.336 [2024-11-19 16:09:19.943067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.336 [2024-11-19 16:09:19.943076] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.336 [2024-11-19 16:09:19.943686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:14:13.336 [2024-11-19 16:09:19.943949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:14:13.336 [2024-11-19 16:09:19.944078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:14:13.336 [2024-11-19 16:09:19.944381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.336 [2024-11-19 16:09:19.950302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 [2024-11-19 16:09:20.141403] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 Malloc0 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.614 16:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 [2024-11-19 16:09:20.182940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:14:13.614 { 00:14:13.614 "params": { 00:14:13.614 "name": "Nvme$subsystem", 00:14:13.614 "trtype": "$TEST_TRANSPORT", 00:14:13.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:13.614 "adrfam": "ipv4", 00:14:13.614 "trsvcid": "$NVMF_PORT", 00:14:13.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:13.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:13.614 "hdgst": ${hdgst:-false}, 00:14:13.614 "ddgst": ${ddgst:-false} 00:14:13.614 }, 00:14:13.614 "method": "bdev_nvme_attach_controller" 00:14:13.614 } 00:14:13.614 EOF 00:14:13.614 )") 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
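The bdevio invocation above receives its bdev configuration through bash process substitution: gen_nvmf_target_json prints a JSON config containing a bdev_nvme_attach_controller entry (the expanded text appears just below in the trace), and --json /dev/fd/62 is simply the file descriptor the shell assigned to that pipe, so no temporary file is needed. A minimal sketch of the same pattern, with a hand-written config standing in for gen_nvmf_target_json (the outer wrapper structure is not fully visible in this log, so treat the JSON shape as illustrative):

# Hand-written stand-in for gen_nvmf_target_json; only the attach-controller
# entry is taken from the trace above, the "subsystems"/"config" wrapper is
# an assumption. Adjust NQN and address to the subsystem under test.
gen_cfg() {
cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}

# The shell picks the /dev/fd/NN number for the substitution (62 in the
# traced run); bdevio reads the generated config straight from that fd.
./test/bdev/bdevio/bdevio --json <(gen_cfg) --no-huge -s 1024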
00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:14:13.614 16:09:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:14:13.614 "params": { 00:14:13.614 "name": "Nvme1", 00:14:13.614 "trtype": "tcp", 00:14:13.614 "traddr": "10.0.0.3", 00:14:13.614 "adrfam": "ipv4", 00:14:13.614 "trsvcid": "4420", 00:14:13.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:13.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:13.614 "hdgst": false, 00:14:13.614 "ddgst": false 00:14:13.614 }, 00:14:13.614 "method": "bdev_nvme_attach_controller" 00:14:13.614 }' 00:14:13.614 [2024-11-19 16:09:20.268535] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:14:13.614 [2024-11-19 16:09:20.268698] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid83814 ] 00:14:13.894 [2024-11-19 16:09:20.446840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:13.894 [2024-11-19 16:09:20.506653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.894 [2024-11-19 16:09:20.506909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.894 [2024-11-19 16:09:20.507013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.894 [2024-11-19 16:09:20.521942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:14.153 I/O targets: 00:14:14.153 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:14.153 00:14:14.153 00:14:14.153 CUnit - A unit testing framework for C - Version 2.1-3 00:14:14.153 http://cunit.sourceforge.net/ 00:14:14.153 00:14:14.153 00:14:14.153 Suite: bdevio tests on: Nvme1n1 00:14:14.153 Test: blockdev write read block ...passed 00:14:14.153 Test: blockdev write zeroes read block ...passed 00:14:14.153 Test: blockdev write zeroes read no split ...passed 00:14:14.153 Test: blockdev write zeroes read split ...passed 00:14:14.153 Test: blockdev write zeroes read split partial ...passed 00:14:14.153 Test: blockdev reset ...[2024-11-19 16:09:20.745796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:14:14.153 [2024-11-19 16:09:20.746027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x66c430 (9): Bad file descriptor 00:14:14.153 passed 00:14:14.153 Test: blockdev write read 8 blocks ...[2024-11-19 16:09:20.765363] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:14:14.153 passed 00:14:14.153 Test: blockdev write read size > 128k ...passed 00:14:14.153 Test: blockdev write read invalid size ...passed 00:14:14.153 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:14.153 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:14.153 Test: blockdev write read max offset ...passed 00:14:14.153 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:14.153 Test: blockdev writev readv 8 blocks ...passed 00:14:14.153 Test: blockdev writev readv 30 x 1block ...passed 00:14:14.153 Test: blockdev writev readv block ...passed 00:14:14.153 Test: blockdev writev readv size > 128k ...passed 00:14:14.153 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:14.153 Test: blockdev comparev and writev ...[2024-11-19 16:09:20.776553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.776635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.776662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.776675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.777224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.777288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.777312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.777333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.777799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.777849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.777872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.777897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.778366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.153 [2024-11-19 16:09:20.778415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:14.153 [2024-11-19 16:09:20.778437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:14.154 [2024-11-19 16:09:20.778449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:14:14.154 passed 00:14:14.154 Test: blockdev nvme passthru rw ...passed 00:14:14.154 Test: blockdev nvme passthru vendor specific ...[2024-11-19 16:09:20.780102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.154 [2024-11-19 16:09:20.780370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:14.154 [2024-11-19 16:09:20.780699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.154 [2024-11-19 16:09:20.780735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:14.154 [2024-11-19 16:09:20.781053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.154 [2024-11-19 16:09:20.781088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:14.154 [2024-11-19 16:09:20.781401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:14.154 [2024-11-19 16:09:20.781436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:14.154 passed 00:14:14.154 Test: blockdev nvme admin passthru ...passed 00:14:14.154 Test: blockdev copy ...passed 00:14:14.154 00:14:14.154 Run Summary: Type Total Ran Passed Failed Inactive 00:14:14.154 suites 1 1 n/a 0 0 00:14:14.154 tests 23 23 23 0 0 00:14:14.154 asserts 152 152 152 0 n/a 00:14:14.154 00:14:14.154 Elapsed time = 0.171 seconds 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.413 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.413 rmmod nvme_tcp 00:14:14.672 rmmod nvme_fabrics 00:14:14.672 rmmod nvme_keyring 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:14:14.672 16:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 83785 ']' 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 83785 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 83785 ']' 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 83785 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83785 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:14:14.672 killing process with pid 83785 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83785' 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 83785 00:14:14.672 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 83785 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:14.931 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:15.190 00:14:15.190 real 0m2.786s 00:14:15.190 user 0m7.522s 00:14:15.190 sys 0m1.289s 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:15.190 ************************************ 00:14:15.190 END TEST nvmf_bdevio_no_huge 00:14:15.190 ************************************ 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:15.190 ************************************ 00:14:15.190 START TEST nvmf_tls 00:14:15.190 ************************************ 00:14:15.190 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:15.450 * Looking for test storage... 
00:14:15.450 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:15.450 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:15.450 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:14:15.450 16:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:15.450 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:15.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.451 --rc genhtml_branch_coverage=1 00:14:15.451 --rc genhtml_function_coverage=1 00:14:15.451 --rc genhtml_legend=1 00:14:15.451 --rc geninfo_all_blocks=1 00:14:15.451 --rc geninfo_unexecuted_blocks=1 00:14:15.451 00:14:15.451 ' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:15.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.451 --rc genhtml_branch_coverage=1 00:14:15.451 --rc genhtml_function_coverage=1 00:14:15.451 --rc genhtml_legend=1 00:14:15.451 --rc geninfo_all_blocks=1 00:14:15.451 --rc geninfo_unexecuted_blocks=1 00:14:15.451 00:14:15.451 ' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:15.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.451 --rc genhtml_branch_coverage=1 00:14:15.451 --rc genhtml_function_coverage=1 00:14:15.451 --rc genhtml_legend=1 00:14:15.451 --rc geninfo_all_blocks=1 00:14:15.451 --rc geninfo_unexecuted_blocks=1 00:14:15.451 00:14:15.451 ' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:15.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:15.451 --rc genhtml_branch_coverage=1 00:14:15.451 --rc genhtml_function_coverage=1 00:14:15.451 --rc genhtml_legend=1 00:14:15.451 --rc geninfo_all_blocks=1 00:14:15.451 --rc geninfo_unexecuted_blocks=1 00:14:15.451 00:14:15.451 ' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:15.451 16:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:15.451 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:15.451 
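The "integer expression expected" complaint from common.sh line 33 above is the classic failure mode of handing test -eq an empty string: an unset flag is compared as if it were a number. A small illustrative sketch of the problem and a defensive default; the flag variable name here is hypothetical, since the variable actually checked at line 33 is not visible in this log:

flag=''
[ "$flag" -eq 1 ] 2>/dev/null || echo "test -eq rejects an empty operand"

# Safer: default the flag to 0 before comparing, so the test is always numeric.
flag=${flag:-0}
if [ "$flag" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi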
16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:15.451 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:15.452 Cannot find device "nvmf_init_br" 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:15.452 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:15.452 Cannot find device "nvmf_init_br2" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:15.710 Cannot find device "nvmf_tgt_br" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:15.710 Cannot find device "nvmf_tgt_br2" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:15.710 Cannot find device "nvmf_init_br" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:15.710 Cannot find device "nvmf_init_br2" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:15.710 Cannot find device "nvmf_tgt_br" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:15.710 Cannot find device "nvmf_tgt_br2" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:15.710 Cannot find device "nvmf_br" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:15.710 Cannot find device "nvmf_init_if" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:15.710 Cannot find device "nvmf_init_if2" 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:15.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:15.710 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:15.710 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:15.711 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:15.970 16:09:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:15.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:15.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:14:15.970 00:14:15.970 --- 10.0.0.3 ping statistics --- 00:14:15.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.970 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:15.970 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:15.970 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:15.970 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:14:15.970 00:14:15.970 --- 10.0.0.4 ping statistics --- 00:14:15.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.971 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:15.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:15.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:14:15.971 00:14:15.971 --- 10.0.0.1 ping statistics --- 00:14:15.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.971 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:15.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:15.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:15.971 00:14:15.971 --- 10.0.0.2 ping statistics --- 00:14:15.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.971 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84051 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84051 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84051 ']' 00:14:15.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:15.971 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:15.971 [2024-11-19 16:09:22.612094] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
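The nvmf_veth_init block above builds the virtual network the rest of the run talks over: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks across 10.0.0.1-10.0.0.4. A condensed sketch of that topology, reduced to a single initiator/target pair; interface and address names are taken from the log, and the cleanup/error handling the real script does first is omitted:

set -e
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry addresses, the *_br ends plug into the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side ends together and open the NVMe/TCP port.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check: the host can reach the namespaced target address and vice versa.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1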
00:14:15.971 [2024-11-19 16:09:22.612185] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.230 [2024-11-19 16:09:22.768723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:16.230 [2024-11-19 16:09:22.791770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.230 [2024-11-19 16:09:22.791828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:16.230 [2024-11-19 16:09:22.791841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.230 [2024-11-19 16:09:22.791852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.230 [2024-11-19 16:09:22.791861] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.230 [2024-11-19 16:09:22.792198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:16.230 16:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:16.489 true 00:14:16.490 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:16.490 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.058 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:17.058 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:17.058 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:17.058 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.058 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:17.317 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:17.317 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:17.317 16:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:17.576 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:17.576 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:17.835 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:17.835 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:17.835 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:17.835 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:18.094 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:18.094 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:18.094 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:18.353 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:18.353 16:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:18.612 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:18.612 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:18.612 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:18.871 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:18.871 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.fjQdx5dqyP 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.6uGShkfUU2 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fjQdx5dqyP 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.6uGShkfUU2 00:14:19.130 16:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:19.389 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:19.648 [2024-11-19 16:09:26.331897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.908 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.fjQdx5dqyP 00:14:19.908 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fjQdx5dqyP 00:14:19.908 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:19.908 [2024-11-19 16:09:26.595663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.908 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:20.167 16:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:20.426 [2024-11-19 16:09:27.047789] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:20.426 [2024-11-19 16:09:27.048280] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:20.426 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:20.684 malloc0 00:14:20.684 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:20.943 16:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fjQdx5dqyP 00:14:21.202 16:09:27 
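Taken together, the commands above stand up the TLS side of the target: two interchange-format PSKs are written to mktemp files with 0600 permissions, the ssl socket implementation is pinned to TLS 1.3, and the subsystem gets a listener created with -k plus a keyring entry for the key. A condensed sketch of that RPC sequence against a target started with --wait-for-rpc, using the key material and NQNs shown in the log (rpc.py path assigned to a variable for brevity; the nvmf_subsystem_add_host call that appears next in the log is included here for completeness):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key_path=$(mktemp)

# PSK in NVMe TLS interchange format, written with owner-only permissions.
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# Force TLS 1.3 on the ssl socket implementation, then finish app init.
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# TCP transport, subsystem, and a listener created with -k (TLS secure channel).
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k

# Backing namespace, plus the keyring entry the host is allowed to authenticate with.
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0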
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:21.462 16:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.fjQdx5dqyP 00:14:33.669 Initializing NVMe Controllers 00:14:33.669 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:33.669 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:33.669 Initialization complete. Launching workers. 00:14:33.669 ======================================================== 00:14:33.669 Latency(us) 00:14:33.669 Device Information : IOPS MiB/s Average min max 00:14:33.669 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10015.89 39.12 6391.82 1411.05 10719.20 00:14:33.669 ======================================================== 00:14:33.669 Total : 10015.89 39.12 6391.82 1411.05 10719.20 00:14:33.669 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fjQdx5dqyP 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fjQdx5dqyP 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84277 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84277 /var/tmp/bdevperf.sock 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84277 ']' 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:33.669 [2024-11-19 16:09:38.261511] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:14:33.669 [2024-11-19 16:09:38.261805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84277 ] 00:14:33.669 [2024-11-19 16:09:38.414630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.669 [2024-11-19 16:09:38.441361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.669 [2024-11-19 16:09:38.477929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fjQdx5dqyP 00:14:33.669 16:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:33.669 [2024-11-19 16:09:39.061795] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:33.669 TLSTESTn1 00:14:33.669 16:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:33.669 Running I/O for 10 seconds... 
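The bdevperf run above exercises the same PSK from the initiator side: bdevperf is launched idle with its own RPC socket, the key file is registered there as key0, bdev_nvme_attach_controller is called with --psk, and the perform_tests helper then drives the verify workload. A condensed sketch of that sequence with the paths and NQNs taken from the log; the real test waits for the RPC socket to appear (waitforlisten) before issuing RPCs, which is elided here:

spdk=/home/vagrant/spdk_repo/spdk
sock=/var/tmp/bdevperf.sock

# -z keeps bdevperf idle until the attach and test-start RPCs arrive on $sock.
$spdk/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &

# Same interchange-format PSK as the target side, registered as key0 on bdevperf's socket.
$spdk/scripts/rpc.py -s "$sock" keyring_file_add_key key0 /tmp/tmp.fjQdx5dqyP

# TLS attach: --psk selects the keyring entry used for the NVMe/TCP connection.
$spdk/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# Drive the 10-second verify workload against the attached TLSTESTn1 bdev.
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests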
00:14:34.606 3840.00 IOPS, 15.00 MiB/s [2024-11-19T16:09:42.699Z] 4018.50 IOPS, 15.70 MiB/s [2024-11-19T16:09:43.636Z] 4010.67 IOPS, 15.67 MiB/s [2024-11-19T16:09:44.573Z] 4032.00 IOPS, 15.75 MiB/s [2024-11-19T16:09:45.509Z] 4019.20 IOPS, 15.70 MiB/s [2024-11-19T16:09:46.445Z] 4035.33 IOPS, 15.76 MiB/s [2024-11-19T16:09:47.382Z] 4025.57 IOPS, 15.72 MiB/s [2024-11-19T16:09:48.319Z] 4043.62 IOPS, 15.80 MiB/s [2024-11-19T16:09:49.713Z] 4076.89 IOPS, 15.93 MiB/s [2024-11-19T16:09:49.713Z] 4098.40 IOPS, 16.01 MiB/s 00:14:42.998 Latency(us) 00:14:42.998 [2024-11-19T16:09:49.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.998 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:42.998 Verification LBA range: start 0x0 length 0x2000 00:14:42.998 TLSTESTn1 : 10.01 4105.09 16.04 0.00 0.00 31129.55 4825.83 29669.93 00:14:42.998 [2024-11-19T16:09:49.713Z] =================================================================================================================== 00:14:42.998 [2024-11-19T16:09:49.713Z] Total : 4105.09 16.04 0.00 0.00 31129.55 4825.83 29669.93 00:14:42.998 { 00:14:42.998 "results": [ 00:14:42.998 { 00:14:42.998 "job": "TLSTESTn1", 00:14:42.998 "core_mask": "0x4", 00:14:42.998 "workload": "verify", 00:14:42.998 "status": "finished", 00:14:42.998 "verify_range": { 00:14:42.998 "start": 0, 00:14:42.998 "length": 8192 00:14:42.998 }, 00:14:42.998 "queue_depth": 128, 00:14:42.998 "io_size": 4096, 00:14:42.998 "runtime": 10.013654, 00:14:42.998 "iops": 4105.094903418872, 00:14:42.998 "mibps": 16.035526966479967, 00:14:42.998 "io_failed": 0, 00:14:42.998 "io_timeout": 0, 00:14:42.998 "avg_latency_us": 31129.54818595373, 00:14:42.998 "min_latency_us": 4825.832727272727, 00:14:42.998 "max_latency_us": 29669.934545454544 00:14:42.998 } 00:14:42.998 ], 00:14:42.998 "core_count": 1 00:14:42.998 } 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84277 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84277 ']' 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84277 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84277 00:14:42.998 killing process with pid 84277 00:14:42.998 Received shutdown signal, test time was about 10.000000 seconds 00:14:42.998 00:14:42.998 Latency(us) 00:14:42.998 [2024-11-19T16:09:49.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:42.998 [2024-11-19T16:09:49.713Z] =================================================================================================================== 00:14:42.998 [2024-11-19T16:09:49.713Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 84277' 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84277 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84277 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6uGShkfUU2 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6uGShkfUU2 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6uGShkfUU2 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.6uGShkfUU2 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84404 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84404 /var/tmp/bdevperf.sock 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84404 ']' 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:42.998 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.998 [2024-11-19 16:09:49.533658] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:14:42.998 [2024-11-19 16:09:49.534005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84404 ] 00:14:42.998 [2024-11-19 16:09:49.689661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.257 [2024-11-19 16:09:49.715309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.257 [2024-11-19 16:09:49.750485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:43.257 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:43.257 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:43.257 16:09:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.6uGShkfUU2 00:14:43.516 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:43.775 [2024-11-19 16:09:50.361874] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:43.775 [2024-11-19 16:09:50.371440] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spd[2024-11-19 16:09:50.371586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142c210 (107): Transport endpoint is not connected 00:14:43.775 k_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:43.775 [2024-11-19 16:09:50.372577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142c210 (9): Bad file descriptor 00:14:43.775 [2024-11-19 16:09:50.373573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:43.775 [2024-11-19 16:09:50.373603] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:43.775 [2024-11-19 16:09:50.373630] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:43.775 [2024-11-19 16:09:50.373646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:43.775 request: 00:14:43.775 { 00:14:43.775 "name": "TLSTEST", 00:14:43.775 "trtype": "tcp", 00:14:43.775 "traddr": "10.0.0.3", 00:14:43.775 "adrfam": "ipv4", 00:14:43.775 "trsvcid": "4420", 00:14:43.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:43.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:43.775 "prchk_reftag": false, 00:14:43.775 "prchk_guard": false, 00:14:43.775 "hdgst": false, 00:14:43.775 "ddgst": false, 00:14:43.775 "psk": "key0", 00:14:43.775 "allow_unrecognized_csi": false, 00:14:43.775 "method": "bdev_nvme_attach_controller", 00:14:43.775 "req_id": 1 00:14:43.775 } 00:14:43.775 Got JSON-RPC error response 00:14:43.775 response: 00:14:43.775 { 00:14:43.775 "code": -5, 00:14:43.775 "message": "Input/output error" 00:14:43.775 } 00:14:43.775 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84404 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84404 ']' 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84404 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84404 00:14:43.776 killing process with pid 84404 00:14:43.776 Received shutdown signal, test time was about 10.000000 seconds 00:14:43.776 00:14:43.776 Latency(us) 00:14:43.776 [2024-11-19T16:09:50.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.776 [2024-11-19T16:09:50.491Z] =================================================================================================================== 00:14:43.776 [2024-11-19T16:09:50.491Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84404' 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84404 00:14:43.776 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84404 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fjQdx5dqyP 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fjQdx5dqyP 
00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.fjQdx5dqyP 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:44.035 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fjQdx5dqyP 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84425 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84425 /var/tmp/bdevperf.sock 00:14:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84425 ']' 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.036 16:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:44.036 [2024-11-19 16:09:50.624123] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:14:44.036 [2024-11-19 16:09:50.624218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84425 ] 00:14:44.295 [2024-11-19 16:09:50.772470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.295 [2024-11-19 16:09:50.792609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.295 [2024-11-19 16:09:50.823159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:45.231 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.231 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:45.231 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fjQdx5dqyP 00:14:45.231 16:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:45.491 [2024-11-19 16:09:52.087576] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:45.491 [2024-11-19 16:09:52.092767] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:45.491 [2024-11-19 16:09:52.093047] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:45.491 [2024-11-19 16:09:52.093122] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:45.491 [2024-11-19 16:09:52.093454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbd210 (107): Transport endpoint is not connected 00:14:45.491 [2024-11-19 16:09:52.094439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbd210 (9): Bad file descriptor 00:14:45.491 [2024-11-19 16:09:52.095435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:14:45.491 [2024-11-19 16:09:52.095469] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:45.491 [2024-11-19 16:09:52.095481] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:45.491 [2024-11-19 16:09:52.095498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:14:45.491 request: 00:14:45.491 { 00:14:45.491 "name": "TLSTEST", 00:14:45.491 "trtype": "tcp", 00:14:45.491 "traddr": "10.0.0.3", 00:14:45.491 "adrfam": "ipv4", 00:14:45.491 "trsvcid": "4420", 00:14:45.491 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.491 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:45.491 "prchk_reftag": false, 00:14:45.491 "prchk_guard": false, 00:14:45.491 "hdgst": false, 00:14:45.491 "ddgst": false, 00:14:45.491 "psk": "key0", 00:14:45.491 "allow_unrecognized_csi": false, 00:14:45.491 "method": "bdev_nvme_attach_controller", 00:14:45.491 "req_id": 1 00:14:45.491 } 00:14:45.491 Got JSON-RPC error response 00:14:45.491 response: 00:14:45.491 { 00:14:45.491 "code": -5, 00:14:45.491 "message": "Input/output error" 00:14:45.491 } 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84425 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84425 ']' 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84425 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84425 00:14:45.491 killing process with pid 84425 00:14:45.491 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.491 00:14:45.491 Latency(us) 00:14:45.491 [2024-11-19T16:09:52.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.491 [2024-11-19T16:09:52.206Z] =================================================================================================================== 00:14:45.491 [2024-11-19T16:09:52.206Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84425' 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84425 00:14:45.491 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84425 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fjQdx5dqyP 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fjQdx5dqyP 
00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.fjQdx5dqyP 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fjQdx5dqyP 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84459 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:45.750 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84459 /var/tmp/bdevperf.sock 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84459 ']' 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:45.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.751 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:45.751 [2024-11-19 16:09:52.354626] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:14:45.751 [2024-11-19 16:09:52.354988] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84459 ] 00:14:46.009 [2024-11-19 16:09:52.507793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.009 [2024-11-19 16:09:52.529378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.009 [2024-11-19 16:09:52.559651] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.009 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:46.009 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:46.009 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fjQdx5dqyP 00:14:46.267 16:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:46.525 [2024-11-19 16:09:53.159980] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:46.525 [2024-11-19 16:09:53.171573] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:46.525 [2024-11-19 16:09:53.171613] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:46.525 [2024-11-19 16:09:53.171678] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:46.525 [2024-11-19 16:09:53.172904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4210 (107): Transport endpoint is not connected 00:14:46.525 [2024-11-19 16:09:53.173688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15f4210 (9): Bad file descriptor 00:14:46.525 [2024-11-19 16:09:53.174693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:14:46.525 [2024-11-19 16:09:53.174720] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:46.525 [2024-11-19 16:09:53.174746] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:46.525 [2024-11-19 16:09:53.174761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:14:46.525 request: 00:14:46.525 { 00:14:46.525 "name": "TLSTEST", 00:14:46.525 "trtype": "tcp", 00:14:46.525 "traddr": "10.0.0.3", 00:14:46.525 "adrfam": "ipv4", 00:14:46.525 "trsvcid": "4420", 00:14:46.525 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:46.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:46.525 "prchk_reftag": false, 00:14:46.525 "prchk_guard": false, 00:14:46.525 "hdgst": false, 00:14:46.525 "ddgst": false, 00:14:46.525 "psk": "key0", 00:14:46.525 "allow_unrecognized_csi": false, 00:14:46.525 "method": "bdev_nvme_attach_controller", 00:14:46.525 "req_id": 1 00:14:46.526 } 00:14:46.526 Got JSON-RPC error response 00:14:46.526 response: 00:14:46.526 { 00:14:46.526 "code": -5, 00:14:46.526 "message": "Input/output error" 00:14:46.526 } 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84459 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84459 ']' 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84459 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84459 00:14:46.526 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84459' 00:14:46.785 killing process with pid 84459 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84459 00:14:46.785 Received shutdown signal, test time was about 10.000000 seconds 00:14:46.785 00:14:46.785 Latency(us) 00:14:46.785 [2024-11-19T16:09:53.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.785 [2024-11-19T16:09:53.500Z] =================================================================================================================== 00:14:46.785 [2024-11-19T16:09:53.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84459 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:46.785 16:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84480 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84480 /var/tmp/bdevperf.sock 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84480 ']' 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:46.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.785 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:46.785 [2024-11-19 16:09:53.417478] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
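This NOT case hands run_bdevperf an empty string instead of a key path. As the trace below shows, keyring_file_add_key rejects it up front ('Non-absolute paths are not allowed', error -1), and the attach that follows fails with -126 because key0 was never created. A minimal reproduction against the same bdevperf RPC socket (sketch only, rpc.py assumed on PATH):

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' \
    || echo 'rejected as expected: the key path must be an absolute path'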
00:14:46.785 [2024-11-19 16:09:53.417568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84480 ] 00:14:47.044 [2024-11-19 16:09:53.561492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.044 [2024-11-19 16:09:53.583048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.044 [2024-11-19 16:09:53.612827] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:47.044 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.044 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:47.044 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:47.302 [2024-11-19 16:09:53.953551] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:47.302 [2024-11-19 16:09:53.953611] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:47.302 request: 00:14:47.302 { 00:14:47.302 "name": "key0", 00:14:47.302 "path": "", 00:14:47.302 "method": "keyring_file_add_key", 00:14:47.302 "req_id": 1 00:14:47.302 } 00:14:47.302 Got JSON-RPC error response 00:14:47.302 response: 00:14:47.302 { 00:14:47.302 "code": -1, 00:14:47.302 "message": "Operation not permitted" 00:14:47.302 } 00:14:47.302 16:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:47.561 [2024-11-19 16:09:54.197705] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:47.561 [2024-11-19 16:09:54.197787] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:47.561 request: 00:14:47.561 { 00:14:47.561 "name": "TLSTEST", 00:14:47.561 "trtype": "tcp", 00:14:47.561 "traddr": "10.0.0.3", 00:14:47.561 "adrfam": "ipv4", 00:14:47.561 "trsvcid": "4420", 00:14:47.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:47.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:47.561 "prchk_reftag": false, 00:14:47.561 "prchk_guard": false, 00:14:47.561 "hdgst": false, 00:14:47.561 "ddgst": false, 00:14:47.561 "psk": "key0", 00:14:47.561 "allow_unrecognized_csi": false, 00:14:47.561 "method": "bdev_nvme_attach_controller", 00:14:47.561 "req_id": 1 00:14:47.561 } 00:14:47.561 Got JSON-RPC error response 00:14:47.561 response: 00:14:47.561 { 00:14:47.561 "code": -126, 00:14:47.561 "message": "Required key not available" 00:14:47.561 } 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84480 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84480 ']' 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84480 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.561 16:09:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84480 00:14:47.561 killing process with pid 84480 00:14:47.561 Received shutdown signal, test time was about 10.000000 seconds 00:14:47.561 00:14:47.561 Latency(us) 00:14:47.561 [2024-11-19T16:09:54.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.561 [2024-11-19T16:09:54.276Z] =================================================================================================================== 00:14:47.561 [2024-11-19T16:09:54.276Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84480' 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84480 00:14:47.561 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84480 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 84051 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84051 ']' 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84051 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84051 00:14:47.820 killing process with pid 84051 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84051' 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84051 00:14:47.820 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84051 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:14:48.079 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.tTehi7oCd0 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.tTehi7oCd0 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84511 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84511 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84511 ']' 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:48.080 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.080 [2024-11-19 16:09:54.669509] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:14:48.080 [2024-11-19 16:09:54.670314] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.339 [2024-11-19 16:09:54.819778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.339 [2024-11-19 16:09:54.837505] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.339 [2024-11-19 16:09:54.837559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
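The NVMeTLSkey-1:02:... value assembled just above is the NVMe-oF TLS PSK interchange form of the 48-character key string configured above, with digest 2 selecting the SHA-384 form of the interchange format; the harness builds it with the inline python step traced at nvmf/common.sh@733. A sketch of that transform, assuming the usual interchange layout of base64 over the key bytes followed by a CRC32 trailer (the exact packing is an assumption, not something the trace shows):

key='00112233445566778899aabbccddeeff0011223344556677'
digest=2
python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
# assumed layout: key bytes plus a 4-byte little-endian CRC32, then base64
blob = base64.b64encode(key + struct.pack("<I", zlib.crc32(key))).decode()
print(f"NVMeTLSkey-1:0{sys.argv[2]}:{blob}:")
EOF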
00:14:48.339 [2024-11-19 16:09:54.837585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.339 [2024-11-19 16:09:54.837592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.339 [2024-11-19 16:09:54.837598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.339 [2024-11-19 16:09:54.837867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.339 [2024-11-19 16:09:54.869153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tTehi7oCd0 00:14:48.339 16:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:48.597 [2024-11-19 16:09:55.288983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.597 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:49.164 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:49.164 [2024-11-19 16:09:55.821146] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:49.164 [2024-11-19 16:09:55.821467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:49.164 16:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:49.423 malloc0 00:14:49.423 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:49.682 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:14:49.941 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTehi7oCd0 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
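At this point the target side is fully set up for TLS: a TCP transport, subsystem cnode1 with the malloc0 namespace, a listener on 10.0.0.3:4420 created with -k, key0 loaded from /tmp/tmp.tTehi7oCd0, and host1 allowed with --psk key0. Condensed into one sketch, the two halves of the successful path that the run below exercises look like this (rpc.py shortened from the full script path in the log; the bdevperf RPC socket as above):

# target side (default RPC socket)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
# initiator side (bdevperf's RPC socket)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0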
00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tTehi7oCd0 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84565 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84565 /var/tmp/bdevperf.sock 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84565 ']' 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.201 16:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.201 [2024-11-19 16:09:56.867552] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:14:50.201 [2024-11-19 16:09:56.867845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84565 ] 00:14:50.459 [2024-11-19 16:09:57.018552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.459 [2024-11-19 16:09:57.044046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.459 [2024-11-19 16:09:57.078932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.459 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.459 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:14:50.459 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:14:50.718 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:50.976 [2024-11-19 16:09:57.594609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.976 TLSTESTn1 00:14:50.976 16:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:51.235 Running I/O for 10 seconds... 00:14:53.104 4073.00 IOPS, 15.91 MiB/s [2024-11-19T16:10:01.197Z] 4109.00 IOPS, 16.05 MiB/s [2024-11-19T16:10:02.133Z] 4089.67 IOPS, 15.98 MiB/s [2024-11-19T16:10:03.071Z] 4067.50 IOPS, 15.89 MiB/s [2024-11-19T16:10:04.009Z] 4015.20 IOPS, 15.68 MiB/s [2024-11-19T16:10:04.947Z] 3980.33 IOPS, 15.55 MiB/s [2024-11-19T16:10:05.884Z] 3949.71 IOPS, 15.43 MiB/s [2024-11-19T16:10:06.821Z] 3928.25 IOPS, 15.34 MiB/s [2024-11-19T16:10:08.210Z] 3934.78 IOPS, 15.37 MiB/s [2024-11-19T16:10:08.210Z] 3957.20 IOPS, 15.46 MiB/s 00:15:01.495 Latency(us) 00:15:01.495 [2024-11-19T16:10:08.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.495 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:01.495 Verification LBA range: start 0x0 length 0x2000 00:15:01.495 TLSTESTn1 : 10.02 3963.55 15.48 0.00 0.00 32240.08 4855.62 33602.09 00:15:01.495 [2024-11-19T16:10:08.210Z] =================================================================================================================== 00:15:01.495 [2024-11-19T16:10:08.210Z] Total : 3963.55 15.48 0.00 0.00 32240.08 4855.62 33602.09 00:15:01.495 { 00:15:01.495 "results": [ 00:15:01.495 { 00:15:01.495 "job": "TLSTESTn1", 00:15:01.495 "core_mask": "0x4", 00:15:01.495 "workload": "verify", 00:15:01.495 "status": "finished", 00:15:01.495 "verify_range": { 00:15:01.495 "start": 0, 00:15:01.495 "length": 8192 00:15:01.495 }, 00:15:01.495 "queue_depth": 128, 00:15:01.495 "io_size": 4096, 00:15:01.495 "runtime": 10.015272, 00:15:01.495 "iops": 3963.5468712182756, 00:15:01.495 "mibps": 15.482604965696389, 00:15:01.495 "io_failed": 0, 00:15:01.495 "io_timeout": 0, 00:15:01.495 "avg_latency_us": 32240.07811512953, 00:15:01.495 "min_latency_us": 4855.6218181818185, 00:15:01.495 
"max_latency_us": 33602.09454545454 00:15:01.495 } 00:15:01.495 ], 00:15:01.495 "core_count": 1 00:15:01.495 } 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 84565 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84565 ']' 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84565 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84565 00:15:01.495 killing process with pid 84565 00:15:01.495 Received shutdown signal, test time was about 10.000000 seconds 00:15:01.495 00:15:01.495 Latency(us) 00:15:01.495 [2024-11-19T16:10:08.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.495 [2024-11-19T16:10:08.210Z] =================================================================================================================== 00:15:01.495 [2024-11-19T16:10:08.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84565' 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84565 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84565 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.tTehi7oCd0 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTehi7oCd0 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTehi7oCd0 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tTehi7oCd0 00:15:01.495 16:10:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:01.495 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:01.495 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.tTehi7oCd0 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=84688 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 84688 /var/tmp/bdevperf.sock 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84688 ']' 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:01.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.496 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.496 [2024-11-19 16:10:08.057878] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
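For the TLSTESTn1 results above, the MiB/s column follows directly from the IOPS column and the 4096-byte I/O size set on the bdevperf command line (-o 4096): 3963.55 IOPS x 4096 bytes is about 16,234,701 bytes/s, and 16,234,701 / 1,048,576 = 15.48 MiB/s, matching the reported value over the 10.015 s runtime.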
00:15:01.496 [2024-11-19 16:10:08.058167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84688 ] 00:15:01.755 [2024-11-19 16:10:08.211065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.755 [2024-11-19 16:10:08.235663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:01.755 [2024-11-19 16:10:08.269097] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:01.755 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.755 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:01.755 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:02.014 [2024-11-19 16:10:08.635437] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tTehi7oCd0': 0100666 00:15:02.014 [2024-11-19 16:10:08.635490] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:02.014 request: 00:15:02.014 { 00:15:02.014 "name": "key0", 00:15:02.014 "path": "/tmp/tmp.tTehi7oCd0", 00:15:02.014 "method": "keyring_file_add_key", 00:15:02.014 "req_id": 1 00:15:02.014 } 00:15:02.014 Got JSON-RPC error response 00:15:02.014 response: 00:15:02.014 { 00:15:02.014 "code": -1, 00:15:02.014 "message": "Operation not permitted" 00:15:02.014 } 00:15:02.014 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:02.273 [2024-11-19 16:10:08.883580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.273 [2024-11-19 16:10:08.883666] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:02.273 request: 00:15:02.273 { 00:15:02.273 "name": "TLSTEST", 00:15:02.273 "trtype": "tcp", 00:15:02.273 "traddr": "10.0.0.3", 00:15:02.273 "adrfam": "ipv4", 00:15:02.273 "trsvcid": "4420", 00:15:02.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.273 "prchk_reftag": false, 00:15:02.273 "prchk_guard": false, 00:15:02.273 "hdgst": false, 00:15:02.273 "ddgst": false, 00:15:02.273 "psk": "key0", 00:15:02.273 "allow_unrecognized_csi": false, 00:15:02.273 "method": "bdev_nvme_attach_controller", 00:15:02.273 "req_id": 1 00:15:02.273 } 00:15:02.273 Got JSON-RPC error response 00:15:02.273 response: 00:15:02.273 { 00:15:02.273 "code": -126, 00:15:02.273 "message": "Required key not available" 00:15:02.273 } 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 84688 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84688 ']' 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84688 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84688 00:15:02.273 killing process with pid 84688 00:15:02.273 Received shutdown signal, test time was about 10.000000 seconds 00:15:02.273 00:15:02.273 Latency(us) 00:15:02.273 [2024-11-19T16:10:08.988Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:02.273 [2024-11-19T16:10:08.988Z] =================================================================================================================== 00:15:02.273 [2024-11-19T16:10:08.988Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84688' 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84688 00:15:02.273 16:10:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84688 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 84511 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84511 ']' 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84511 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84511 00:15:02.532 killing process with pid 84511 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84511' 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84511 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84511 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84719 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84719 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84719 ']' 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:02.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:02.532 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.791 [2024-11-19 16:10:09.284694] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:02.791 [2024-11-19 16:10:09.284802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:02.791 [2024-11-19 16:10:09.437806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.791 [2024-11-19 16:10:09.460281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:02.791 [2024-11-19 16:10:09.460341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:02.791 [2024-11-19 16:10:09.460360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:02.791 [2024-11-19 16:10:09.460371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:02.791 [2024-11-19 16:10:09.460380] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:02.791 [2024-11-19 16:10:09.460717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:02.791 [2024-11-19 16:10:09.492619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tTehi7oCd0 00:15:03.051 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:03.309 [2024-11-19 16:10:09.870480] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:03.309 16:10:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:03.568 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:03.827 [2024-11-19 16:10:10.514629] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:03.827 [2024-11-19 16:10:10.514853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:03.827 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:04.394 malloc0 00:15:04.394 16:10:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:04.652 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:04.652 
[2024-11-19 16:10:11.324930] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.tTehi7oCd0': 0100666 00:15:04.652 [2024-11-19 16:10:11.324978] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:04.652 request: 00:15:04.652 { 00:15:04.652 "name": "key0", 00:15:04.652 "path": "/tmp/tmp.tTehi7oCd0", 00:15:04.652 "method": "keyring_file_add_key", 00:15:04.652 "req_id": 1 00:15:04.652 } 00:15:04.652 Got JSON-RPC error response 00:15:04.652 response: 00:15:04.652 { 00:15:04.653 "code": -1, 00:15:04.653 "message": "Operation not permitted" 00:15:04.653 } 00:15:04.653 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:05.220 [2024-11-19 16:10:11.641032] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:05.220 [2024-11-19 16:10:11.641115] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:05.220 request: 00:15:05.220 { 00:15:05.220 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:05.220 "host": "nqn.2016-06.io.spdk:host1", 00:15:05.220 "psk": "key0", 00:15:05.220 "method": "nvmf_subsystem_add_host", 00:15:05.220 "req_id": 1 00:15:05.221 } 00:15:05.221 Got JSON-RPC error response 00:15:05.221 response: 00:15:05.221 { 00:15:05.221 "code": -32603, 00:15:05.221 "message": "Internal error" 00:15:05.221 } 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84719 ']' 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:05.221 killing process with pid 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84719' 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84719 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.tTehi7oCd0 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84775 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84775 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84775 ']' 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.221 16:10:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.221 [2024-11-19 16:10:11.885880] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:05.221 [2024-11-19 16:10:11.885975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.480 [2024-11-19 16:10:12.031949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.480 [2024-11-19 16:10:12.050830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:05.480 [2024-11-19 16:10:12.050914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:05.480 [2024-11-19 16:10:12.050926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:05.480 [2024-11-19 16:10:12.050934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:05.480 [2024-11-19 16:10:12.050942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
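Both rejections above stem from the keyring's permission check: after the harness chmods the key file to 0666, keyring_file_add_key refuses it first on the bdevperf side and then on the target side ('Invalid permissions ... 0100666'), and the later nvmf_subsystem_add_host cannot find key0 at all, which surfaces as the -32603 Internal error. The fix the harness applies before the next phase is simply to make the file private again; sketched with the same paths the log uses:

chmod 0600 /tmp/tmp.tTehi7oCd0
rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0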
00:15:05.480 [2024-11-19 16:10:12.051273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.481 [2024-11-19 16:10:12.079716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tTehi7oCd0 00:15:05.481 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:06.049 [2024-11-19 16:10:12.463962] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.049 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:06.307 16:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:06.565 [2024-11-19 16:10:13.084084] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:06.565 [2024-11-19 16:10:13.084365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:06.565 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:06.823 malloc0 00:15:06.823 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:07.390 16:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:07.649 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:07.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=84829 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 84829 /var/tmp/bdevperf.sock 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84829 ']' 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:07.934 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.934 [2024-11-19 16:10:14.581909] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:07.934 [2024-11-19 16:10:14.582210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84829 ] 00:15:08.214 [2024-11-19 16:10:14.730796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.214 [2024-11-19 16:10:14.755505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.214 [2024-11-19 16:10:14.789588] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.214 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:08.214 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:08.214 16:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:08.473 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:08.732 [2024-11-19 16:10:15.424746] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:08.991 TLSTESTn1 00:15:08.991 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:09.252 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:09.252 "subsystems": [ 00:15:09.252 { 00:15:09.252 "subsystem": "keyring", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "keyring_file_add_key", 00:15:09.252 "params": { 00:15:09.252 "name": "key0", 00:15:09.252 "path": "/tmp/tmp.tTehi7oCd0" 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 
00:15:09.252 { 00:15:09.252 "subsystem": "iobuf", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "iobuf_set_options", 00:15:09.252 "params": { 00:15:09.252 "small_pool_count": 8192, 00:15:09.252 "large_pool_count": 1024, 00:15:09.252 "small_bufsize": 8192, 00:15:09.252 "large_bufsize": 135168, 00:15:09.252 "enable_numa": false 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "sock", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "sock_set_default_impl", 00:15:09.252 "params": { 00:15:09.252 "impl_name": "uring" 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "sock_impl_set_options", 00:15:09.252 "params": { 00:15:09.252 "impl_name": "ssl", 00:15:09.252 "recv_buf_size": 4096, 00:15:09.252 "send_buf_size": 4096, 00:15:09.252 "enable_recv_pipe": true, 00:15:09.252 "enable_quickack": false, 00:15:09.252 "enable_placement_id": 0, 00:15:09.252 "enable_zerocopy_send_server": true, 00:15:09.252 "enable_zerocopy_send_client": false, 00:15:09.252 "zerocopy_threshold": 0, 00:15:09.252 "tls_version": 0, 00:15:09.252 "enable_ktls": false 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "sock_impl_set_options", 00:15:09.252 "params": { 00:15:09.252 "impl_name": "posix", 00:15:09.252 "recv_buf_size": 2097152, 00:15:09.252 "send_buf_size": 2097152, 00:15:09.252 "enable_recv_pipe": true, 00:15:09.252 "enable_quickack": false, 00:15:09.252 "enable_placement_id": 0, 00:15:09.252 "enable_zerocopy_send_server": true, 00:15:09.252 "enable_zerocopy_send_client": false, 00:15:09.252 "zerocopy_threshold": 0, 00:15:09.252 "tls_version": 0, 00:15:09.252 "enable_ktls": false 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "sock_impl_set_options", 00:15:09.252 "params": { 00:15:09.252 "impl_name": "uring", 00:15:09.252 "recv_buf_size": 2097152, 00:15:09.252 "send_buf_size": 2097152, 00:15:09.252 "enable_recv_pipe": true, 00:15:09.252 "enable_quickack": false, 00:15:09.252 "enable_placement_id": 0, 00:15:09.252 "enable_zerocopy_send_server": false, 00:15:09.252 "enable_zerocopy_send_client": false, 00:15:09.252 "zerocopy_threshold": 0, 00:15:09.252 "tls_version": 0, 00:15:09.252 "enable_ktls": false 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "vmd", 00:15:09.252 "config": [] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "accel", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "accel_set_options", 00:15:09.252 "params": { 00:15:09.252 "small_cache_size": 128, 00:15:09.252 "large_cache_size": 16, 00:15:09.252 "task_count": 2048, 00:15:09.252 "sequence_count": 2048, 00:15:09.252 "buf_count": 2048 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "bdev", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "bdev_set_options", 00:15:09.252 "params": { 00:15:09.252 "bdev_io_pool_size": 65535, 00:15:09.252 "bdev_io_cache_size": 256, 00:15:09.252 "bdev_auto_examine": true, 00:15:09.252 "iobuf_small_cache_size": 128, 00:15:09.252 "iobuf_large_cache_size": 16 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "bdev_raid_set_options", 00:15:09.252 "params": { 00:15:09.252 "process_window_size_kb": 1024, 00:15:09.252 "process_max_bandwidth_mb_sec": 0 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "bdev_iscsi_set_options", 00:15:09.252 "params": { 00:15:09.252 "timeout_sec": 30 00:15:09.252 } 00:15:09.252 
}, 00:15:09.252 { 00:15:09.252 "method": "bdev_nvme_set_options", 00:15:09.252 "params": { 00:15:09.252 "action_on_timeout": "none", 00:15:09.252 "timeout_us": 0, 00:15:09.252 "timeout_admin_us": 0, 00:15:09.252 "keep_alive_timeout_ms": 10000, 00:15:09.252 "arbitration_burst": 0, 00:15:09.252 "low_priority_weight": 0, 00:15:09.252 "medium_priority_weight": 0, 00:15:09.252 "high_priority_weight": 0, 00:15:09.252 "nvme_adminq_poll_period_us": 10000, 00:15:09.252 "nvme_ioq_poll_period_us": 0, 00:15:09.252 "io_queue_requests": 0, 00:15:09.252 "delay_cmd_submit": true, 00:15:09.252 "transport_retry_count": 4, 00:15:09.252 "bdev_retry_count": 3, 00:15:09.252 "transport_ack_timeout": 0, 00:15:09.252 "ctrlr_loss_timeout_sec": 0, 00:15:09.252 "reconnect_delay_sec": 0, 00:15:09.252 "fast_io_fail_timeout_sec": 0, 00:15:09.252 "disable_auto_failback": false, 00:15:09.252 "generate_uuids": false, 00:15:09.252 "transport_tos": 0, 00:15:09.252 "nvme_error_stat": false, 00:15:09.252 "rdma_srq_size": 0, 00:15:09.252 "io_path_stat": false, 00:15:09.252 "allow_accel_sequence": false, 00:15:09.252 "rdma_max_cq_size": 0, 00:15:09.252 "rdma_cm_event_timeout_ms": 0, 00:15:09.252 "dhchap_digests": [ 00:15:09.252 "sha256", 00:15:09.252 "sha384", 00:15:09.252 "sha512" 00:15:09.252 ], 00:15:09.252 "dhchap_dhgroups": [ 00:15:09.252 "null", 00:15:09.252 "ffdhe2048", 00:15:09.252 "ffdhe3072", 00:15:09.252 "ffdhe4096", 00:15:09.252 "ffdhe6144", 00:15:09.252 "ffdhe8192" 00:15:09.252 ] 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "bdev_nvme_set_hotplug", 00:15:09.252 "params": { 00:15:09.252 "period_us": 100000, 00:15:09.252 "enable": false 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "bdev_malloc_create", 00:15:09.252 "params": { 00:15:09.252 "name": "malloc0", 00:15:09.252 "num_blocks": 8192, 00:15:09.252 "block_size": 4096, 00:15:09.252 "physical_block_size": 4096, 00:15:09.252 "uuid": "6e651adb-3405-440c-95f7-5b1189ae8735", 00:15:09.252 "optimal_io_boundary": 0, 00:15:09.252 "md_size": 0, 00:15:09.252 "dif_type": 0, 00:15:09.252 "dif_is_head_of_md": false, 00:15:09.252 "dif_pi_format": 0 00:15:09.252 } 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "method": "bdev_wait_for_examine" 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "nbd", 00:15:09.252 "config": [] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "scheduler", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "framework_set_scheduler", 00:15:09.252 "params": { 00:15:09.252 "name": "static" 00:15:09.252 } 00:15:09.252 } 00:15:09.252 ] 00:15:09.252 }, 00:15:09.252 { 00:15:09.252 "subsystem": "nvmf", 00:15:09.252 "config": [ 00:15:09.252 { 00:15:09.252 "method": "nvmf_set_config", 00:15:09.252 "params": { 00:15:09.252 "discovery_filter": "match_any", 00:15:09.252 "admin_cmd_passthru": { 00:15:09.252 "identify_ctrlr": false 00:15:09.253 }, 00:15:09.253 "dhchap_digests": [ 00:15:09.253 "sha256", 00:15:09.253 "sha384", 00:15:09.253 "sha512" 00:15:09.253 ], 00:15:09.253 "dhchap_dhgroups": [ 00:15:09.253 "null", 00:15:09.253 "ffdhe2048", 00:15:09.253 "ffdhe3072", 00:15:09.253 "ffdhe4096", 00:15:09.253 "ffdhe6144", 00:15:09.253 "ffdhe8192" 00:15:09.253 ] 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_set_max_subsystems", 00:15:09.253 "params": { 00:15:09.253 "max_subsystems": 1024 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_set_crdt", 00:15:09.253 "params": { 00:15:09.253 "crdt1": 0, 00:15:09.253 
"crdt2": 0, 00:15:09.253 "crdt3": 0 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_create_transport", 00:15:09.253 "params": { 00:15:09.253 "trtype": "TCP", 00:15:09.253 "max_queue_depth": 128, 00:15:09.253 "max_io_qpairs_per_ctrlr": 127, 00:15:09.253 "in_capsule_data_size": 4096, 00:15:09.253 "max_io_size": 131072, 00:15:09.253 "io_unit_size": 131072, 00:15:09.253 "max_aq_depth": 128, 00:15:09.253 "num_shared_buffers": 511, 00:15:09.253 "buf_cache_size": 4294967295, 00:15:09.253 "dif_insert_or_strip": false, 00:15:09.253 "zcopy": false, 00:15:09.253 "c2h_success": false, 00:15:09.253 "sock_priority": 0, 00:15:09.253 "abort_timeout_sec": 1, 00:15:09.253 "ack_timeout": 0, 00:15:09.253 "data_wr_pool_size": 0 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_create_subsystem", 00:15:09.253 "params": { 00:15:09.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.253 "allow_any_host": false, 00:15:09.253 "serial_number": "SPDK00000000000001", 00:15:09.253 "model_number": "SPDK bdev Controller", 00:15:09.253 "max_namespaces": 10, 00:15:09.253 "min_cntlid": 1, 00:15:09.253 "max_cntlid": 65519, 00:15:09.253 "ana_reporting": false 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_subsystem_add_host", 00:15:09.253 "params": { 00:15:09.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.253 "host": "nqn.2016-06.io.spdk:host1", 00:15:09.253 "psk": "key0" 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_subsystem_add_ns", 00:15:09.253 "params": { 00:15:09.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.253 "namespace": { 00:15:09.253 "nsid": 1, 00:15:09.253 "bdev_name": "malloc0", 00:15:09.253 "nguid": "6E651ADB3405440C95F75B1189AE8735", 00:15:09.253 "uuid": "6e651adb-3405-440c-95f7-5b1189ae8735", 00:15:09.253 "no_auto_visible": false 00:15:09.253 } 00:15:09.253 } 00:15:09.253 }, 00:15:09.253 { 00:15:09.253 "method": "nvmf_subsystem_add_listener", 00:15:09.253 "params": { 00:15:09.253 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.253 "listen_address": { 00:15:09.253 "trtype": "TCP", 00:15:09.253 "adrfam": "IPv4", 00:15:09.253 "traddr": "10.0.0.3", 00:15:09.253 "trsvcid": "4420" 00:15:09.253 }, 00:15:09.253 "secure_channel": true 00:15:09.253 } 00:15:09.253 } 00:15:09.253 ] 00:15:09.253 } 00:15:09.253 ] 00:15:09.253 }' 00:15:09.253 16:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:09.513 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:09.513 "subsystems": [ 00:15:09.513 { 00:15:09.513 "subsystem": "keyring", 00:15:09.513 "config": [ 00:15:09.513 { 00:15:09.513 "method": "keyring_file_add_key", 00:15:09.513 "params": { 00:15:09.513 "name": "key0", 00:15:09.513 "path": "/tmp/tmp.tTehi7oCd0" 00:15:09.513 } 00:15:09.513 } 00:15:09.513 ] 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "subsystem": "iobuf", 00:15:09.513 "config": [ 00:15:09.513 { 00:15:09.513 "method": "iobuf_set_options", 00:15:09.513 "params": { 00:15:09.513 "small_pool_count": 8192, 00:15:09.513 "large_pool_count": 1024, 00:15:09.513 "small_bufsize": 8192, 00:15:09.513 "large_bufsize": 135168, 00:15:09.513 "enable_numa": false 00:15:09.513 } 00:15:09.513 } 00:15:09.513 ] 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "subsystem": "sock", 00:15:09.513 "config": [ 00:15:09.513 { 00:15:09.513 "method": "sock_set_default_impl", 00:15:09.513 "params": { 00:15:09.513 "impl_name": "uring" 00:15:09.513 
} 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "sock_impl_set_options", 00:15:09.513 "params": { 00:15:09.513 "impl_name": "ssl", 00:15:09.513 "recv_buf_size": 4096, 00:15:09.513 "send_buf_size": 4096, 00:15:09.513 "enable_recv_pipe": true, 00:15:09.513 "enable_quickack": false, 00:15:09.513 "enable_placement_id": 0, 00:15:09.513 "enable_zerocopy_send_server": true, 00:15:09.513 "enable_zerocopy_send_client": false, 00:15:09.513 "zerocopy_threshold": 0, 00:15:09.513 "tls_version": 0, 00:15:09.513 "enable_ktls": false 00:15:09.513 } 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "sock_impl_set_options", 00:15:09.513 "params": { 00:15:09.513 "impl_name": "posix", 00:15:09.513 "recv_buf_size": 2097152, 00:15:09.513 "send_buf_size": 2097152, 00:15:09.513 "enable_recv_pipe": true, 00:15:09.513 "enable_quickack": false, 00:15:09.513 "enable_placement_id": 0, 00:15:09.513 "enable_zerocopy_send_server": true, 00:15:09.513 "enable_zerocopy_send_client": false, 00:15:09.513 "zerocopy_threshold": 0, 00:15:09.513 "tls_version": 0, 00:15:09.513 "enable_ktls": false 00:15:09.513 } 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "sock_impl_set_options", 00:15:09.513 "params": { 00:15:09.513 "impl_name": "uring", 00:15:09.513 "recv_buf_size": 2097152, 00:15:09.513 "send_buf_size": 2097152, 00:15:09.513 "enable_recv_pipe": true, 00:15:09.513 "enable_quickack": false, 00:15:09.513 "enable_placement_id": 0, 00:15:09.513 "enable_zerocopy_send_server": false, 00:15:09.513 "enable_zerocopy_send_client": false, 00:15:09.513 "zerocopy_threshold": 0, 00:15:09.513 "tls_version": 0, 00:15:09.513 "enable_ktls": false 00:15:09.513 } 00:15:09.513 } 00:15:09.513 ] 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "subsystem": "vmd", 00:15:09.513 "config": [] 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "subsystem": "accel", 00:15:09.513 "config": [ 00:15:09.513 { 00:15:09.513 "method": "accel_set_options", 00:15:09.513 "params": { 00:15:09.513 "small_cache_size": 128, 00:15:09.513 "large_cache_size": 16, 00:15:09.513 "task_count": 2048, 00:15:09.513 "sequence_count": 2048, 00:15:09.513 "buf_count": 2048 00:15:09.513 } 00:15:09.513 } 00:15:09.513 ] 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "subsystem": "bdev", 00:15:09.513 "config": [ 00:15:09.513 { 00:15:09.513 "method": "bdev_set_options", 00:15:09.513 "params": { 00:15:09.513 "bdev_io_pool_size": 65535, 00:15:09.513 "bdev_io_cache_size": 256, 00:15:09.513 "bdev_auto_examine": true, 00:15:09.513 "iobuf_small_cache_size": 128, 00:15:09.513 "iobuf_large_cache_size": 16 00:15:09.513 } 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "bdev_raid_set_options", 00:15:09.513 "params": { 00:15:09.513 "process_window_size_kb": 1024, 00:15:09.513 "process_max_bandwidth_mb_sec": 0 00:15:09.513 } 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "bdev_iscsi_set_options", 00:15:09.513 "params": { 00:15:09.513 "timeout_sec": 30 00:15:09.513 } 00:15:09.513 }, 00:15:09.513 { 00:15:09.513 "method": "bdev_nvme_set_options", 00:15:09.513 "params": { 00:15:09.513 "action_on_timeout": "none", 00:15:09.513 "timeout_us": 0, 00:15:09.513 "timeout_admin_us": 0, 00:15:09.513 "keep_alive_timeout_ms": 10000, 00:15:09.513 "arbitration_burst": 0, 00:15:09.513 "low_priority_weight": 0, 00:15:09.513 "medium_priority_weight": 0, 00:15:09.513 "high_priority_weight": 0, 00:15:09.513 "nvme_adminq_poll_period_us": 10000, 00:15:09.513 "nvme_ioq_poll_period_us": 0, 00:15:09.513 "io_queue_requests": 512, 00:15:09.513 "delay_cmd_submit": true, 00:15:09.513 "transport_retry_count": 4, 
00:15:09.513 "bdev_retry_count": 3, 00:15:09.513 "transport_ack_timeout": 0, 00:15:09.514 "ctrlr_loss_timeout_sec": 0, 00:15:09.514 "reconnect_delay_sec": 0, 00:15:09.514 "fast_io_fail_timeout_sec": 0, 00:15:09.514 "disable_auto_failback": false, 00:15:09.514 "generate_uuids": false, 00:15:09.514 "transport_tos": 0, 00:15:09.514 "nvme_error_stat": false, 00:15:09.514 "rdma_srq_size": 0, 00:15:09.514 "io_path_stat": false, 00:15:09.514 "allow_accel_sequence": false, 00:15:09.514 "rdma_max_cq_size": 0, 00:15:09.514 "rdma_cm_event_timeout_ms": 0, 00:15:09.514 "dhchap_digests": [ 00:15:09.514 "sha256", 00:15:09.514 "sha384", 00:15:09.514 "sha512" 00:15:09.514 ], 00:15:09.514 "dhchap_dhgroups": [ 00:15:09.514 "null", 00:15:09.514 "ffdhe2048", 00:15:09.514 "ffdhe3072", 00:15:09.514 "ffdhe4096", 00:15:09.514 "ffdhe6144", 00:15:09.514 "ffdhe8192" 00:15:09.514 ] 00:15:09.514 } 00:15:09.514 }, 00:15:09.514 { 00:15:09.514 "method": "bdev_nvme_attach_controller", 00:15:09.514 "params": { 00:15:09.514 "name": "TLSTEST", 00:15:09.514 "trtype": "TCP", 00:15:09.514 "adrfam": "IPv4", 00:15:09.514 "traddr": "10.0.0.3", 00:15:09.514 "trsvcid": "4420", 00:15:09.514 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:09.514 "prchk_reftag": false, 00:15:09.514 "prchk_guard": false, 00:15:09.514 "ctrlr_loss_timeout_sec": 0, 00:15:09.514 "reconnect_delay_sec": 0, 00:15:09.514 "fast_io_fail_timeout_sec": 0, 00:15:09.514 "psk": "key0", 00:15:09.514 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:09.514 "hdgst": false, 00:15:09.514 "ddgst": false, 00:15:09.514 "multipath": "multipath" 00:15:09.514 } 00:15:09.514 }, 00:15:09.514 { 00:15:09.514 "method": "bdev_nvme_set_hotplug", 00:15:09.514 "params": { 00:15:09.514 "period_us": 100000, 00:15:09.514 "enable": false 00:15:09.514 } 00:15:09.514 }, 00:15:09.514 { 00:15:09.514 "method": "bdev_wait_for_examine" 00:15:09.514 } 00:15:09.514 ] 00:15:09.514 }, 00:15:09.514 { 00:15:09.514 "subsystem": "nbd", 00:15:09.514 "config": [] 00:15:09.514 } 00:15:09.514 ] 00:15:09.514 }' 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 84829 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84829 ']' 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84829 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84829 00:15:09.514 killing process with pid 84829 00:15:09.514 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.514 00:15:09.514 Latency(us) 00:15:09.514 [2024-11-19T16:10:16.229Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.514 [2024-11-19T16:10:16.229Z] =================================================================================================================== 00:15:09.514 [2024-11-19T16:10:16.229Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 84829' 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84829 00:15:09.514 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84829 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 84775 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84775 ']' 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84775 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84775 00:15:09.774 killing process with pid 84775 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84775' 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84775 00:15:09.774 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84775 00:15:10.034 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:10.034 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.034 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:10.034 "subsystems": [ 00:15:10.034 { 00:15:10.034 "subsystem": "keyring", 00:15:10.034 "config": [ 00:15:10.034 { 00:15:10.034 "method": "keyring_file_add_key", 00:15:10.034 "params": { 00:15:10.034 "name": "key0", 00:15:10.034 "path": "/tmp/tmp.tTehi7oCd0" 00:15:10.034 } 00:15:10.034 } 00:15:10.034 ] 00:15:10.034 }, 00:15:10.034 { 00:15:10.034 "subsystem": "iobuf", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "iobuf_set_options", 00:15:10.035 "params": { 00:15:10.035 "small_pool_count": 8192, 00:15:10.035 "large_pool_count": 1024, 00:15:10.035 "small_bufsize": 8192, 00:15:10.035 "large_bufsize": 135168, 00:15:10.035 "enable_numa": false 00:15:10.035 } 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "sock", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "sock_set_default_impl", 00:15:10.035 "params": { 00:15:10.035 "impl_name": "uring" 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "sock_impl_set_options", 00:15:10.035 "params": { 00:15:10.035 "impl_name": "ssl", 00:15:10.035 "recv_buf_size": 4096, 00:15:10.035 "send_buf_size": 4096, 00:15:10.035 "enable_recv_pipe": true, 00:15:10.035 "enable_quickack": false, 00:15:10.035 "enable_placement_id": 0, 00:15:10.035 "enable_zerocopy_send_server": true, 00:15:10.035 "enable_zerocopy_send_client": false, 00:15:10.035 "zerocopy_threshold": 0, 00:15:10.035 "tls_version": 0, 00:15:10.035 "enable_ktls": false 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "sock_impl_set_options", 00:15:10.035 "params": { 00:15:10.035 "impl_name": "posix", 00:15:10.035 "recv_buf_size": 2097152, 
00:15:10.035 "send_buf_size": 2097152, 00:15:10.035 "enable_recv_pipe": true, 00:15:10.035 "enable_quickack": false, 00:15:10.035 "enable_placement_id": 0, 00:15:10.035 "enable_zerocopy_send_server": true, 00:15:10.035 "enable_zerocopy_send_client": false, 00:15:10.035 "zerocopy_threshold": 0, 00:15:10.035 "tls_version": 0, 00:15:10.035 "enable_ktls": false 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "sock_impl_set_options", 00:15:10.035 "params": { 00:15:10.035 "impl_name": "uring", 00:15:10.035 "recv_buf_size": 2097152, 00:15:10.035 "send_buf_size": 2097152, 00:15:10.035 "enable_recv_pipe": true, 00:15:10.035 "enable_quickack": false, 00:15:10.035 "enable_placement_id": 0, 00:15:10.035 "enable_zerocopy_send_server": false, 00:15:10.035 "enable_zerocopy_send_client": false, 00:15:10.035 "zerocopy_threshold": 0, 00:15:10.035 "tls_version": 0, 00:15:10.035 "enable_ktls": false 00:15:10.035 } 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "vmd", 00:15:10.035 "config": [] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "accel", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "accel_set_options", 00:15:10.035 "params": { 00:15:10.035 "small_cache_size": 128, 00:15:10.035 "large_cache_size": 16, 00:15:10.035 "task_count": 2048, 00:15:10.035 "sequence_count": 2048, 00:15:10.035 "buf_count": 2048 00:15:10.035 } 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "bdev", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "bdev_set_options", 00:15:10.035 "params": { 00:15:10.035 "bdev_io_pool_size": 65535, 00:15:10.035 "bdev_io_cache_size": 256, 00:15:10.035 "bdev_auto_examine": true, 00:15:10.035 "iobuf_small_cache_size": 128, 00:15:10.035 "iobuf_large_cache_size": 16 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_raid_set_options", 00:15:10.035 "params": { 00:15:10.035 "process_window_size_kb": 1024, 00:15:10.035 "process_max_bandwidth_mb_sec": 0 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_iscsi_set_options", 00:15:10.035 "params": { 00:15:10.035 "timeout_sec": 30 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_nvme_set_options", 00:15:10.035 "params": { 00:15:10.035 "action_on_timeout": "none", 00:15:10.035 "timeout_us": 0, 00:15:10.035 "timeout_admin_us": 0, 00:15:10.035 "keep_alive_timeout_ms": 10000, 00:15:10.035 "arbitration_burst": 0, 00:15:10.035 "low_priority_weight": 0, 00:15:10.035 "medium_priority_weight": 0, 00:15:10.035 "high_priority_weight": 0, 00:15:10.035 "nvme_adminq_poll_period_us": 10000, 00:15:10.035 "nvme_ioq_poll_period_us": 0, 00:15:10.035 "io_queue_requests": 0, 00:15:10.035 "delay_cmd_submit": true, 00:15:10.035 "transport_retry_count": 4, 00:15:10.035 "bdev_retry_count": 3, 00:15:10.035 "transport_ack_timeout": 0, 00:15:10.035 "ctrlr_loss_timeout_sec": 0, 00:15:10.035 "reconnect_delay_sec": 0, 00:15:10.035 "fast_io_fail_timeout_sec": 0, 00:15:10.035 "disable_auto_failback": false, 00:15:10.035 "generate_uuids": false, 00:15:10.035 "transport_tos": 0, 00:15:10.035 "nvme_error_stat": false, 00:15:10.035 "rdma_srq_size": 0, 00:15:10.035 "io_path_stat": false, 00:15:10.035 "allow_accel_sequence": false, 00:15:10.035 "rdma_max_cq_size": 0, 00:15:10.035 "rdma_cm_event_timeout_ms": 0, 00:15:10.035 "dhchap_digests": [ 00:15:10.035 "sha256", 00:15:10.035 "sha384", 00:15:10.035 "sha512" 00:15:10.035 ], 00:15:10.035 "dhchap_dhgroups": [ 00:15:10.035 "null", 
00:15:10.035 "ffdhe2048", 00:15:10.035 "ffdhe3072", 00:15:10.035 "ffdhe4096", 00:15:10.035 "ffdhe6144", 00:15:10.035 "ffdhe8192" 00:15:10.035 ] 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_nvme_set_hotplug", 00:15:10.035 "params": { 00:15:10.035 "period_us": 100000, 00:15:10.035 "enable": false 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_malloc_create", 00:15:10.035 "params": { 00:15:10.035 "name": "malloc0", 00:15:10.035 "num_blocks": 8192, 00:15:10.035 "block_size": 4096, 00:15:10.035 "physical_block_size": 4096, 00:15:10.035 "uuid": "6e651adb-3405-440c-95f7-5b1189ae8735", 00:15:10.035 "optimal_io_boundary": 0, 00:15:10.035 "md_size": 0, 00:15:10.035 "dif_type": 0, 00:15:10.035 "dif_is_head_of_md": false, 00:15:10.035 "dif_pi_format": 0 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "bdev_wait_for_examine" 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "nbd", 00:15:10.035 "config": [] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "scheduler", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "framework_set_scheduler", 00:15:10.035 "params": { 00:15:10.035 "name": "static" 00:15:10.035 } 00:15:10.035 } 00:15:10.035 ] 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "subsystem": "nvmf", 00:15:10.035 "config": [ 00:15:10.035 { 00:15:10.035 "method": "nvmf_set_config", 00:15:10.035 "params": { 00:15:10.035 "discovery_filter": "match_any", 00:15:10.035 "admin_cmd_passthru": { 00:15:10.035 "identify_ctrlr": false 00:15:10.035 }, 00:15:10.035 "dhchap_digests": [ 00:15:10.035 "sha256", 00:15:10.035 "sha384", 00:15:10.035 "sha512" 00:15:10.035 ], 00:15:10.035 "dhchap_dhgroups": [ 00:15:10.035 "null", 00:15:10.035 "ffdhe2048", 00:15:10.035 "ffdhe3072", 00:15:10.035 "ffdhe4096", 00:15:10.035 "ffdhe6144", 00:15:10.035 "ffdhe8192" 00:15:10.035 ] 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "nvmf_set_max_subsystems", 00:15:10.035 "params": { 00:15:10.035 "max_subsystems": 1024 00:15:10.035 } 00:15:10.035 }, 00:15:10.035 { 00:15:10.035 "method": "nvmf_set_crdt", 00:15:10.035 "params": { 00:15:10.035 "crdt1": 0, 00:15:10.036 "crdt2": 0, 00:15:10.036 "crdt3": 0 00:15:10.036 } 00:15:10.036 }, 00:15:10.036 { 00:15:10.036 "method": "nvmf_create_transport", 00:15:10.036 "params": { 00:15:10.036 "trtype": "TCP", 00:15:10.036 "max_queue_depth": 128, 00:15:10.036 "max_io_qpairs_per_ctrlr": 127, 00:15:10.036 "in_capsule_data_size": 4096, 00:15:10.036 "max_io_size": 131072, 00:15:10.036 "io_unit_size": 131072, 00:15:10.036 "max_aq_depth": 128, 00:15:10.036 "num_shared_buffers": 511, 00:15:10.036 "buf_cache_size": 4294967295, 00:15:10.036 "dif_insert_or_strip": false, 00:15:10.036 "zcopy": false, 00:15:10.036 "c2h_success": false, 00:15:10.036 "sock_priority": 0, 00:15:10.036 "abort_timeout_sec": 1, 00:15:10.036 "ack_timeout": 0, 00:15:10.036 "data_wr_pool_size": 0 00:15:10.036 } 00:15:10.036 }, 00:15:10.036 { 00:15:10.036 "method": "nvmf_create_subsystem", 00:15:10.036 "params": { 00:15:10.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.036 "allow_any_host": false, 00:15:10.036 "serial_number": "SPDK00000000000001", 00:15:10.036 "model_number": "SPDK bdev Controller", 00:15:10.036 "max_namespaces": 10, 00:15:10.036 "min_cntlid": 1, 00:15:10.036 "max_cntlid": 65519, 00:15:10.036 "ana_reporting": false 00:15:10.036 } 00:15:10.036 }, 00:15:10.036 { 00:15:10.036 "method": "nvmf_subsystem_add_host", 00:15:10.036 "params": { 00:15:10.036 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:15:10.036 "host": "nqn.2016-06.io.spdk:host1", 00:15:10.036 "psk": "key0" 00:15:10.036 } 00:15:10.036 }, 00:15:10.036 { 00:15:10.036 "method": "nvmf_subsystem_add_ns", 00:15:10.036 "params": { 00:15:10.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.036 "namespace": { 00:15:10.036 "nsid": 1, 00:15:10.036 "bdev_name": "malloc0", 00:15:10.036 "nguid": "6E651ADB3405440C95F75B1189AE8735", 00:15:10.036 "uuid": "6e651adb-3405-440c-95f7-5b1189ae8735", 00:15:10.036 "no_auto_visible": false 00:15:10.036 } 00:15:10.036 } 00:15:10.036 }, 00:15:10.036 { 00:15:10.036 "method": "nvmf_subsystem_add_listener", 00:15:10.036 "params": { 00:15:10.036 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.036 "listen_address": { 00:15:10.036 "trtype": "TCP", 00:15:10.036 "adrfam": "IPv4", 00:15:10.036 "traddr": "10.0.0.3", 00:15:10.036 "trsvcid": "4420" 00:15:10.036 }, 00:15:10.036 "secure_channel": true 00:15:10.036 } 00:15:10.036 } 00:15:10.036 ] 00:15:10.036 } 00:15:10.036 ] 00:15:10.036 }' 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=84871 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 84871 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84871 ']' 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.036 16:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.036 [2024-11-19 16:10:16.563369] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:10.036 [2024-11-19 16:10:16.563872] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.036 [2024-11-19 16:10:16.711959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.036 [2024-11-19 16:10:16.731332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.036 [2024-11-19 16:10:16.731638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.036 [2024-11-19 16:10:16.731816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.036 [2024-11-19 16:10:16.731832] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:15:10.036 [2024-11-19 16:10:16.731840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.036 [2024-11-19 16:10:16.732220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.295 [2024-11-19 16:10:16.875751] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.295 [2024-11-19 16:10:16.931023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.295 [2024-11-19 16:10:16.962977] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:10.295 [2024-11-19 16:10:16.963182] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:10.864 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:10.864 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:10.864 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.864 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:10.864 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:11.123 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.123 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=84903 00:15:11.123 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 84903 /var/tmp/bdevperf.sock 00:15:11.123 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 84903 ']' 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
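The restarted target (pid 84871) is not reconfigured by hand: the JSON captured earlier with save_config is replayed through -c, here via /dev/fd/62 fed by echo. A minimal sketch of the same round-trip using an ordinary file instead of a file descriptor (tgt.json is an illustrative name, not one the test uses):

    # capture the live target configuration, then start a fresh target from it
    rpc.py save_config > tgt.json
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x2 -c tgt.json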
00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:11.124 16:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:11.124 "subsystems": [ 00:15:11.124 { 00:15:11.124 "subsystem": "keyring", 00:15:11.124 "config": [ 00:15:11.124 { 00:15:11.124 "method": "keyring_file_add_key", 00:15:11.124 "params": { 00:15:11.124 "name": "key0", 00:15:11.124 "path": "/tmp/tmp.tTehi7oCd0" 00:15:11.124 } 00:15:11.124 } 00:15:11.124 ] 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "subsystem": "iobuf", 00:15:11.124 "config": [ 00:15:11.124 { 00:15:11.124 "method": "iobuf_set_options", 00:15:11.124 "params": { 00:15:11.124 "small_pool_count": 8192, 00:15:11.124 "large_pool_count": 1024, 00:15:11.124 "small_bufsize": 8192, 00:15:11.124 "large_bufsize": 135168, 00:15:11.124 "enable_numa": false 00:15:11.124 } 00:15:11.124 } 00:15:11.124 ] 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "subsystem": "sock", 00:15:11.124 "config": [ 00:15:11.124 { 00:15:11.124 "method": "sock_set_default_impl", 00:15:11.124 "params": { 00:15:11.124 "impl_name": "uring" 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "sock_impl_set_options", 00:15:11.124 "params": { 00:15:11.124 "impl_name": "ssl", 00:15:11.124 "recv_buf_size": 4096, 00:15:11.124 "send_buf_size": 4096, 00:15:11.124 "enable_recv_pipe": true, 00:15:11.124 "enable_quickack": false, 00:15:11.124 "enable_placement_id": 0, 00:15:11.124 "enable_zerocopy_send_server": true, 00:15:11.124 "enable_zerocopy_send_client": false, 00:15:11.124 "zerocopy_threshold": 0, 00:15:11.124 "tls_version": 0, 00:15:11.124 "enable_ktls": false 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "sock_impl_set_options", 00:15:11.124 "params": { 00:15:11.124 "impl_name": "posix", 00:15:11.124 "recv_buf_size": 2097152, 00:15:11.124 "send_buf_size": 2097152, 00:15:11.124 "enable_recv_pipe": true, 00:15:11.124 "enable_quickack": false, 00:15:11.124 "enable_placement_id": 0, 00:15:11.124 "enable_zerocopy_send_server": true, 00:15:11.124 "enable_zerocopy_send_client": false, 00:15:11.124 "zerocopy_threshold": 0, 00:15:11.124 "tls_version": 0, 00:15:11.124 "enable_ktls": false 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "sock_impl_set_options", 00:15:11.124 "params": { 00:15:11.124 "impl_name": "uring", 00:15:11.124 "recv_buf_size": 2097152, 00:15:11.124 "send_buf_size": 2097152, 00:15:11.124 "enable_recv_pipe": true, 00:15:11.124 "enable_quickack": false, 00:15:11.124 "enable_placement_id": 0, 00:15:11.124 "enable_zerocopy_send_server": false, 00:15:11.124 "enable_zerocopy_send_client": false, 00:15:11.124 "zerocopy_threshold": 0, 00:15:11.124 "tls_version": 0, 00:15:11.124 "enable_ktls": false 00:15:11.124 } 00:15:11.124 } 00:15:11.124 ] 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "subsystem": "vmd", 00:15:11.124 "config": [] 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "subsystem": "accel", 00:15:11.124 "config": [ 00:15:11.124 { 00:15:11.124 "method": "accel_set_options", 00:15:11.124 "params": { 00:15:11.124 "small_cache_size": 128, 00:15:11.124 "large_cache_size": 16, 00:15:11.124 "task_count": 2048, 00:15:11.124 "sequence_count": 
2048, 00:15:11.124 "buf_count": 2048 00:15:11.124 } 00:15:11.124 } 00:15:11.124 ] 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "subsystem": "bdev", 00:15:11.124 "config": [ 00:15:11.124 { 00:15:11.124 "method": "bdev_set_options", 00:15:11.124 "params": { 00:15:11.124 "bdev_io_pool_size": 65535, 00:15:11.124 "bdev_io_cache_size": 256, 00:15:11.124 "bdev_auto_examine": true, 00:15:11.124 "iobuf_small_cache_size": 128, 00:15:11.124 "iobuf_large_cache_size": 16 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "bdev_raid_set_options", 00:15:11.124 "params": { 00:15:11.124 "process_window_size_kb": 1024, 00:15:11.124 "process_max_bandwidth_mb_sec": 0 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "bdev_iscsi_set_options", 00:15:11.124 "params": { 00:15:11.124 "timeout_sec": 30 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "bdev_nvme_set_options", 00:15:11.124 "params": { 00:15:11.124 "action_on_timeout": "none", 00:15:11.124 "timeout_us": 0, 00:15:11.124 "timeout_admin_us": 0, 00:15:11.124 "keep_alive_timeout_ms": 10000, 00:15:11.124 "arbitration_burst": 0, 00:15:11.124 "low_priority_weight": 0, 00:15:11.124 "medium_priority_weight": 0, 00:15:11.124 "high_priority_weight": 0, 00:15:11.124 "nvme_adminq_poll_period_us": 10000, 00:15:11.124 "nvme_ioq_poll_period_us": 0, 00:15:11.124 "io_queue_requests": 512, 00:15:11.124 "delay_cmd_submit": true, 00:15:11.124 "transport_retry_count": 4, 00:15:11.124 "bdev_retry_count": 3, 00:15:11.124 "transport_ack_timeout": 0, 00:15:11.124 "ctrlr_loss_timeout_sec": 0, 00:15:11.124 "reconnect_delay_sec": 0, 00:15:11.124 "fast_io_fail_timeout_sec": 0, 00:15:11.124 "disable_auto_failback": false, 00:15:11.124 "generate_uuids": false, 00:15:11.124 "transport_tos": 0, 00:15:11.124 "nvme_error_stat": false, 00:15:11.124 "rdma_srq_size": 0, 00:15:11.124 "io_path_stat": false, 00:15:11.124 "allow_accel_sequence": false, 00:15:11.124 "rdma_max_cq_size": 0, 00:15:11.124 "rdma_cm_event_timeout_ms": 0, 00:15:11.124 "dhchap_digests": [ 00:15:11.124 "sha256", 00:15:11.124 "sha384", 00:15:11.124 "sha512" 00:15:11.124 ], 00:15:11.124 "dhchap_dhgroups": [ 00:15:11.124 "null", 00:15:11.124 "ffdhe2048", 00:15:11.124 "ffdhe3072", 00:15:11.124 "ffdhe4096", 00:15:11.124 "ffdhe6144", 00:15:11.124 "ffdhe8192" 00:15:11.124 ] 00:15:11.124 } 00:15:11.124 }, 00:15:11.124 { 00:15:11.124 "method": "bdev_nvme_attach_controller", 00:15:11.124 "params": { 00:15:11.124 "name": "TLSTEST", 00:15:11.124 "trtype": "TCP", 00:15:11.124 "adrfam": "IPv4", 00:15:11.124 "traddr": "10.0.0.3", 00:15:11.124 "trsvcid": "4420", 00:15:11.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.125 "prchk_reftag": false, 00:15:11.125 "prchk_guard": false, 00:15:11.125 "ctrlr_loss_timeout_sec": 0, 00:15:11.125 "reconnect_delay_sec": 0, 00:15:11.125 "fast_io_fail_timeout_sec": 0, 00:15:11.125 "psk": "key0", 00:15:11.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:11.125 "hdgst": false, 00:15:11.125 "ddgst": false, 00:15:11.125 "multipath": "multipath" 00:15:11.125 } 00:15:11.125 }, 00:15:11.125 { 00:15:11.125 "method": "bdev_nvme_set_hotplug", 00:15:11.125 "params": { 00:15:11.125 "period_us": 100000, 00:15:11.125 "enable": false 00:15:11.125 } 00:15:11.125 }, 00:15:11.125 { 00:15:11.125 "method": "bdev_wait_for_examine" 00:15:11.125 } 00:15:11.125 ] 00:15:11.125 }, 00:15:11.125 { 00:15:11.125 "subsystem": "nbd", 00:15:11.125 "config": [] 00:15:11.125 } 00:15:11.125 ] 00:15:11.125 }' 00:15:11.125 [2024-11-19 16:10:17.655291] Starting SPDK v25.01-pre git 
sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:11.125 [2024-11-19 16:10:17.655385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84903 ] 00:15:11.125 [2024-11-19 16:10:17.810384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.125 [2024-11-19 16:10:17.834334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.384 [2024-11-19 16:10:17.948027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:11.384 [2024-11-19 16:10:17.979180] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:12.321 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.321 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:12.321 16:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:12.321 Running I/O for 10 seconds... 00:15:14.218 4050.00 IOPS, 15.82 MiB/s [2024-11-19T16:10:21.870Z] 4119.50 IOPS, 16.09 MiB/s [2024-11-19T16:10:22.806Z] 4174.00 IOPS, 16.30 MiB/s [2024-11-19T16:10:24.183Z] 4160.00 IOPS, 16.25 MiB/s [2024-11-19T16:10:25.120Z] 4196.20 IOPS, 16.39 MiB/s [2024-11-19T16:10:26.058Z] 4196.83 IOPS, 16.39 MiB/s [2024-11-19T16:10:26.995Z] 4204.86 IOPS, 16.43 MiB/s [2024-11-19T16:10:27.939Z] 4187.62 IOPS, 16.36 MiB/s [2024-11-19T16:10:28.877Z] 4192.22 IOPS, 16.38 MiB/s [2024-11-19T16:10:28.877Z] 4197.60 IOPS, 16.40 MiB/s 00:15:22.162 Latency(us) 00:15:22.162 [2024-11-19T16:10:28.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.162 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:22.162 Verification LBA range: start 0x0 length 0x2000 00:15:22.162 TLSTESTn1 : 10.01 4204.29 16.42 0.00 0.00 30394.31 4855.62 30980.65 00:15:22.162 [2024-11-19T16:10:28.877Z] =================================================================================================================== 00:15:22.162 [2024-11-19T16:10:28.877Z] Total : 4204.29 16.42 0.00 0.00 30394.31 4855.62 30980.65 00:15:22.162 { 00:15:22.162 "results": [ 00:15:22.162 { 00:15:22.162 "job": "TLSTESTn1", 00:15:22.162 "core_mask": "0x4", 00:15:22.162 "workload": "verify", 00:15:22.162 "status": "finished", 00:15:22.162 "verify_range": { 00:15:22.162 "start": 0, 00:15:22.162 "length": 8192 00:15:22.162 }, 00:15:22.162 "queue_depth": 128, 00:15:22.162 "io_size": 4096, 00:15:22.162 "runtime": 10.014051, 00:15:22.162 "iops": 4204.2925485400465, 00:15:22.162 "mibps": 16.423017767734557, 00:15:22.162 "io_failed": 0, 00:15:22.162 "io_timeout": 0, 00:15:22.162 "avg_latency_us": 30394.30784976745, 00:15:22.162 "min_latency_us": 4855.6218181818185, 00:15:22.162 "max_latency_us": 30980.654545454545 00:15:22.162 } 00:15:22.162 ], 00:15:22.162 "core_count": 1 00:15:22.162 } 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 84903 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84903 ']' 00:15:22.162 
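The 10-second summary above is internally consistent: 4204.29 IOPS sustained over the reported 10.014 s runtime is roughly 42,100 completed I/Os, and at the 4096-byte I/O size that is 4204.29 x 4096 B, about 16.4 MiB/s, matching the MiB/s column of the results table.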
16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84903 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84903 00:15:22.162 killing process with pid 84903 00:15:22.162 Received shutdown signal, test time was about 10.000000 seconds 00:15:22.162 00:15:22.162 Latency(us) 00:15:22.162 [2024-11-19T16:10:28.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.162 [2024-11-19T16:10:28.877Z] =================================================================================================================== 00:15:22.162 [2024-11-19T16:10:28.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84903' 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84903 00:15:22.162 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84903 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 84871 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 84871 ']' 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 84871 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.421 16:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84871 00:15:22.421 killing process with pid 84871 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84871' 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 84871 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 84871 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:22.421 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85036 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
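killprocess does a small sanity check before signalling: it resolves the PID's command name with ps and only then kills and reaps the process (SPDK reactors report their core in the name, so bdevperf shows up as reactor_2 and the target as reactor_1 here). A reduced sketch of that pattern as it plays out in the trace:

    pid=84903
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_2 for the bdevperf instance
    # the sudo branch is skipped because the process was not launched through sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"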
00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85036 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85036 ']' 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.679 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.679 [2024-11-19 16:10:29.199423] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:22.679 [2024-11-19 16:10:29.199754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:22.679 [2024-11-19 16:10:29.356122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.679 [2024-11-19 16:10:29.379114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:22.679 [2024-11-19 16:10:29.379193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:22.679 [2024-11-19 16:10:29.379207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:22.679 [2024-11-19 16:10:29.379217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:22.679 [2024-11-19 16:10:29.379226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:22.679 [2024-11-19 16:10:29.379608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.937 [2024-11-19 16:10:29.412715] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.937 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:22.938 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.tTehi7oCd0 00:15:22.938 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.tTehi7oCd0 00:15:22.938 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:23.196 [2024-11-19 16:10:29.827193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.196 16:10:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:23.454 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:23.713 [2024-11-19 16:10:30.423440] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:23.713 [2024-11-19 16:10:30.423885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:23.972 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:24.231 malloc0 00:15:24.231 16:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:24.489 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:24.748 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=85090 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 85090 /var/tmp/bdevperf.sock 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85090 ']' 00:15:25.007 
16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.007 16:10:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:25.266 [2024-11-19 16:10:31.750998] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:25.266 [2024-11-19 16:10:31.751694] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85090 ] 00:15:25.266 [2024-11-19 16:10:31.899157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.266 [2024-11-19 16:10:31.924674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.266 [2024-11-19 16:10:31.960310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:25.526 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.526 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:25.526 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:25.785 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:26.044 [2024-11-19 16:10:32.673090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:26.044 nvme0n1 00:15:26.303 16:10:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.303 Running I/O for 1 seconds... 
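For readers following the trace: the whole TLS setup that target/tls.sh just performed boils down to the RPC sequence below, using exactly the key file, address and NQNs seen above (a condensed sketch of the traced commands, with rpc.py standing in for scripts/rpc.py from the SPDK repo):

  # target side (default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k marks the listener as TLS
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

  # initiator side (the bdevperf instance on /var/tmp/bdevperf.sock)
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1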
00:15:27.240 3729.00 IOPS, 14.57 MiB/s 00:15:27.240 Latency(us) 00:15:27.240 [2024-11-19T16:10:33.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.240 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.240 Verification LBA range: start 0x0 length 0x2000 00:15:27.240 nvme0n1 : 1.02 3792.12 14.81 0.00 0.00 33453.64 5630.14 27763.43 00:15:27.240 [2024-11-19T16:10:33.955Z] =================================================================================================================== 00:15:27.240 [2024-11-19T16:10:33.955Z] Total : 3792.12 14.81 0.00 0.00 33453.64 5630.14 27763.43 00:15:27.240 { 00:15:27.240 "results": [ 00:15:27.240 { 00:15:27.240 "job": "nvme0n1", 00:15:27.240 "core_mask": "0x2", 00:15:27.240 "workload": "verify", 00:15:27.240 "status": "finished", 00:15:27.240 "verify_range": { 00:15:27.240 "start": 0, 00:15:27.240 "length": 8192 00:15:27.240 }, 00:15:27.240 "queue_depth": 128, 00:15:27.240 "io_size": 4096, 00:15:27.240 "runtime": 1.017372, 00:15:27.240 "iops": 3792.123235158821, 00:15:27.240 "mibps": 14.812981387339144, 00:15:27.240 "io_failed": 0, 00:15:27.240 "io_timeout": 0, 00:15:27.240 "avg_latency_us": 33453.64325934304, 00:15:27.240 "min_latency_us": 5630.138181818182, 00:15:27.240 "max_latency_us": 27763.432727272728 00:15:27.240 } 00:15:27.240 ], 00:15:27.240 "core_count": 1 00:15:27.240 } 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 85090 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85090 ']' 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85090 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.240 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85090 00:15:27.500 killing process with pid 85090 00:15:27.500 Received shutdown signal, test time was about 1.000000 seconds 00:15:27.500 00:15:27.500 Latency(us) 00:15:27.500 [2024-11-19T16:10:34.215Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.500 [2024-11-19T16:10:34.215Z] =================================================================================================================== 00:15:27.500 [2024-11-19T16:10:34.215Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.500 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:27.500 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:27.500 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85090' 00:15:27.500 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85090 00:15:27.500 16:10:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85090 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 85036 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85036 ']' 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85036 00:15:27.500 16:10:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85036 00:15:27.500 killing process with pid 85036 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85036' 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85036 00:15:27.500 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85036 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85132 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85132 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85132 ']' 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.759 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.759 [2024-11-19 16:10:34.344600] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:27.759 [2024-11-19 16:10:34.344711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.019 [2024-11-19 16:10:34.496511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.019 [2024-11-19 16:10:34.516306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.019 [2024-11-19 16:10:34.516389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:28.019 [2024-11-19 16:10:34.516418] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.019 [2024-11-19 16:10:34.516425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.019 [2024-11-19 16:10:34.516431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.019 [2024-11-19 16:10:34.516789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.019 [2024-11-19 16:10:34.547745] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.019 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.019 [2024-11-19 16:10:34.688859] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.019 malloc0 00:15:28.019 [2024-11-19 16:10:34.714803] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.019 [2024-11-19 16:10:34.715160] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=85158 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 85158 /var/tmp/bdevperf.sock 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85158 ']' 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:28.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
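The repeated 'Waiting for process to start up and listen on UNIX domain socket ...' lines come from the waitforlisten helper traced here (common/autotest_common.sh), which in essence polls the new application's RPC socket until it answers. An approximation of that loop, not the helper's exact code:

  # poll the UNIX-domain RPC socket until the freshly started app responds
  while ! rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done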
00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.278 16:10:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:28.278 [2024-11-19 16:10:34.804798] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:28.278 [2024-11-19 16:10:34.805093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85158 ] 00:15:28.278 [2024-11-19 16:10:34.954141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.278 [2024-11-19 16:10:34.977173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.537 [2024-11-19 16:10:35.009104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.537 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.537 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:28.537 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.tTehi7oCd0 00:15:28.796 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:29.054 [2024-11-19 16:10:35.569580] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:29.054 nvme0n1 00:15:29.054 16:10:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:29.054 Running I/O for 1 seconds... 
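In the result block that follows (exactly as in the earlier run), the MiB/s figure is derived from IOPS and the 4 KiB I/O size, so it can be sanity-checked by hand; for the first run, for example:

  # 3792.12 IOPS x 4096 bytes per I/O, expressed in MiB/s
  awk 'BEGIN { printf "%.2f\n", 3792.12 * 4096 / (1024 * 1024) }'   # prints 14.81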
00:15:30.430 4001.00 IOPS, 15.63 MiB/s 00:15:30.430 Latency(us) 00:15:30.430 [2024-11-19T16:10:37.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.430 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:30.430 Verification LBA range: start 0x0 length 0x2000 00:15:30.430 nvme0n1 : 1.02 4058.15 15.85 0.00 0.00 31281.39 2100.13 23116.33 00:15:30.430 [2024-11-19T16:10:37.145Z] =================================================================================================================== 00:15:30.430 [2024-11-19T16:10:37.145Z] Total : 4058.15 15.85 0.00 0.00 31281.39 2100.13 23116.33 00:15:30.430 { 00:15:30.430 "results": [ 00:15:30.430 { 00:15:30.430 "job": "nvme0n1", 00:15:30.430 "core_mask": "0x2", 00:15:30.430 "workload": "verify", 00:15:30.430 "status": "finished", 00:15:30.430 "verify_range": { 00:15:30.430 "start": 0, 00:15:30.430 "length": 8192 00:15:30.430 }, 00:15:30.430 "queue_depth": 128, 00:15:30.430 "io_size": 4096, 00:15:30.430 "runtime": 1.017706, 00:15:30.430 "iops": 4058.1464588004787, 00:15:30.430 "mibps": 15.85213460468937, 00:15:30.430 "io_failed": 0, 00:15:30.431 "io_timeout": 0, 00:15:30.431 "avg_latency_us": 31281.387319392474, 00:15:30.431 "min_latency_us": 2100.130909090909, 00:15:30.431 "max_latency_us": 23116.334545454545 00:15:30.431 } 00:15:30.431 ], 00:15:30.431 "core_count": 1 00:15:30.431 } 00:15:30.431 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:30.431 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.431 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.431 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.431 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:30.431 "subsystems": [ 00:15:30.431 { 00:15:30.431 "subsystem": "keyring", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "keyring_file_add_key", 00:15:30.431 "params": { 00:15:30.431 "name": "key0", 00:15:30.431 "path": "/tmp/tmp.tTehi7oCd0" 00:15:30.431 } 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "iobuf", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "iobuf_set_options", 00:15:30.431 "params": { 00:15:30.431 "small_pool_count": 8192, 00:15:30.431 "large_pool_count": 1024, 00:15:30.431 "small_bufsize": 8192, 00:15:30.431 "large_bufsize": 135168, 00:15:30.431 "enable_numa": false 00:15:30.431 } 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "sock", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "sock_set_default_impl", 00:15:30.431 "params": { 00:15:30.431 "impl_name": "uring" 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "sock_impl_set_options", 00:15:30.431 "params": { 00:15:30.431 "impl_name": "ssl", 00:15:30.431 "recv_buf_size": 4096, 00:15:30.431 "send_buf_size": 4096, 00:15:30.431 "enable_recv_pipe": true, 00:15:30.431 "enable_quickack": false, 00:15:30.431 "enable_placement_id": 0, 00:15:30.431 "enable_zerocopy_send_server": true, 00:15:30.431 "enable_zerocopy_send_client": false, 00:15:30.431 "zerocopy_threshold": 0, 00:15:30.431 "tls_version": 0, 00:15:30.431 "enable_ktls": false 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "sock_impl_set_options", 00:15:30.431 "params": { 00:15:30.431 "impl_name": 
"posix", 00:15:30.431 "recv_buf_size": 2097152, 00:15:30.431 "send_buf_size": 2097152, 00:15:30.431 "enable_recv_pipe": true, 00:15:30.431 "enable_quickack": false, 00:15:30.431 "enable_placement_id": 0, 00:15:30.431 "enable_zerocopy_send_server": true, 00:15:30.431 "enable_zerocopy_send_client": false, 00:15:30.431 "zerocopy_threshold": 0, 00:15:30.431 "tls_version": 0, 00:15:30.431 "enable_ktls": false 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "sock_impl_set_options", 00:15:30.431 "params": { 00:15:30.431 "impl_name": "uring", 00:15:30.431 "recv_buf_size": 2097152, 00:15:30.431 "send_buf_size": 2097152, 00:15:30.431 "enable_recv_pipe": true, 00:15:30.431 "enable_quickack": false, 00:15:30.431 "enable_placement_id": 0, 00:15:30.431 "enable_zerocopy_send_server": false, 00:15:30.431 "enable_zerocopy_send_client": false, 00:15:30.431 "zerocopy_threshold": 0, 00:15:30.431 "tls_version": 0, 00:15:30.431 "enable_ktls": false 00:15:30.431 } 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "vmd", 00:15:30.431 "config": [] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "accel", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "accel_set_options", 00:15:30.431 "params": { 00:15:30.431 "small_cache_size": 128, 00:15:30.431 "large_cache_size": 16, 00:15:30.431 "task_count": 2048, 00:15:30.431 "sequence_count": 2048, 00:15:30.431 "buf_count": 2048 00:15:30.431 } 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "bdev", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "bdev_set_options", 00:15:30.431 "params": { 00:15:30.431 "bdev_io_pool_size": 65535, 00:15:30.431 "bdev_io_cache_size": 256, 00:15:30.431 "bdev_auto_examine": true, 00:15:30.431 "iobuf_small_cache_size": 128, 00:15:30.431 "iobuf_large_cache_size": 16 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_raid_set_options", 00:15:30.431 "params": { 00:15:30.431 "process_window_size_kb": 1024, 00:15:30.431 "process_max_bandwidth_mb_sec": 0 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_iscsi_set_options", 00:15:30.431 "params": { 00:15:30.431 "timeout_sec": 30 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_nvme_set_options", 00:15:30.431 "params": { 00:15:30.431 "action_on_timeout": "none", 00:15:30.431 "timeout_us": 0, 00:15:30.431 "timeout_admin_us": 0, 00:15:30.431 "keep_alive_timeout_ms": 10000, 00:15:30.431 "arbitration_burst": 0, 00:15:30.431 "low_priority_weight": 0, 00:15:30.431 "medium_priority_weight": 0, 00:15:30.431 "high_priority_weight": 0, 00:15:30.431 "nvme_adminq_poll_period_us": 10000, 00:15:30.431 "nvme_ioq_poll_period_us": 0, 00:15:30.431 "io_queue_requests": 0, 00:15:30.431 "delay_cmd_submit": true, 00:15:30.431 "transport_retry_count": 4, 00:15:30.431 "bdev_retry_count": 3, 00:15:30.431 "transport_ack_timeout": 0, 00:15:30.431 "ctrlr_loss_timeout_sec": 0, 00:15:30.431 "reconnect_delay_sec": 0, 00:15:30.431 "fast_io_fail_timeout_sec": 0, 00:15:30.431 "disable_auto_failback": false, 00:15:30.431 "generate_uuids": false, 00:15:30.431 "transport_tos": 0, 00:15:30.431 "nvme_error_stat": false, 00:15:30.431 "rdma_srq_size": 0, 00:15:30.431 "io_path_stat": false, 00:15:30.431 "allow_accel_sequence": false, 00:15:30.431 "rdma_max_cq_size": 0, 00:15:30.431 "rdma_cm_event_timeout_ms": 0, 00:15:30.431 "dhchap_digests": [ 00:15:30.431 "sha256", 00:15:30.431 "sha384", 00:15:30.431 "sha512" 00:15:30.431 ], 00:15:30.431 
"dhchap_dhgroups": [ 00:15:30.431 "null", 00:15:30.431 "ffdhe2048", 00:15:30.431 "ffdhe3072", 00:15:30.431 "ffdhe4096", 00:15:30.431 "ffdhe6144", 00:15:30.431 "ffdhe8192" 00:15:30.431 ] 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_nvme_set_hotplug", 00:15:30.431 "params": { 00:15:30.431 "period_us": 100000, 00:15:30.431 "enable": false 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_malloc_create", 00:15:30.431 "params": { 00:15:30.431 "name": "malloc0", 00:15:30.431 "num_blocks": 8192, 00:15:30.431 "block_size": 4096, 00:15:30.431 "physical_block_size": 4096, 00:15:30.431 "uuid": "c5655ee4-bd99-481e-9202-e30cca4e6df5", 00:15:30.431 "optimal_io_boundary": 0, 00:15:30.431 "md_size": 0, 00:15:30.431 "dif_type": 0, 00:15:30.431 "dif_is_head_of_md": false, 00:15:30.431 "dif_pi_format": 0 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "bdev_wait_for_examine" 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "nbd", 00:15:30.431 "config": [] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "scheduler", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "framework_set_scheduler", 00:15:30.431 "params": { 00:15:30.431 "name": "static" 00:15:30.431 } 00:15:30.431 } 00:15:30.431 ] 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "subsystem": "nvmf", 00:15:30.431 "config": [ 00:15:30.431 { 00:15:30.431 "method": "nvmf_set_config", 00:15:30.431 "params": { 00:15:30.431 "discovery_filter": "match_any", 00:15:30.431 "admin_cmd_passthru": { 00:15:30.431 "identify_ctrlr": false 00:15:30.431 }, 00:15:30.431 "dhchap_digests": [ 00:15:30.431 "sha256", 00:15:30.431 "sha384", 00:15:30.431 "sha512" 00:15:30.431 ], 00:15:30.431 "dhchap_dhgroups": [ 00:15:30.431 "null", 00:15:30.431 "ffdhe2048", 00:15:30.431 "ffdhe3072", 00:15:30.431 "ffdhe4096", 00:15:30.431 "ffdhe6144", 00:15:30.431 "ffdhe8192" 00:15:30.431 ] 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.431 "method": "nvmf_set_max_subsystems", 00:15:30.431 "params": { 00:15:30.431 "max_subsystems": 1024 00:15:30.431 } 00:15:30.431 }, 00:15:30.431 { 00:15:30.432 "method": "nvmf_set_crdt", 00:15:30.432 "params": { 00:15:30.432 "crdt1": 0, 00:15:30.432 "crdt2": 0, 00:15:30.432 "crdt3": 0 00:15:30.432 } 00:15:30.432 }, 00:15:30.432 { 00:15:30.432 "method": "nvmf_create_transport", 00:15:30.432 "params": { 00:15:30.432 "trtype": "TCP", 00:15:30.432 "max_queue_depth": 128, 00:15:30.432 "max_io_qpairs_per_ctrlr": 127, 00:15:30.432 "in_capsule_data_size": 4096, 00:15:30.432 "max_io_size": 131072, 00:15:30.432 "io_unit_size": 131072, 00:15:30.432 "max_aq_depth": 128, 00:15:30.432 "num_shared_buffers": 511, 00:15:30.432 "buf_cache_size": 4294967295, 00:15:30.432 "dif_insert_or_strip": false, 00:15:30.432 "zcopy": false, 00:15:30.432 "c2h_success": false, 00:15:30.432 "sock_priority": 0, 00:15:30.432 "abort_timeout_sec": 1, 00:15:30.432 "ack_timeout": 0, 00:15:30.432 "data_wr_pool_size": 0 00:15:30.432 } 00:15:30.432 }, 00:15:30.432 { 00:15:30.432 "method": "nvmf_create_subsystem", 00:15:30.432 "params": { 00:15:30.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.432 "allow_any_host": false, 00:15:30.432 "serial_number": "00000000000000000000", 00:15:30.432 "model_number": "SPDK bdev Controller", 00:15:30.432 "max_namespaces": 32, 00:15:30.432 "min_cntlid": 1, 00:15:30.432 "max_cntlid": 65519, 00:15:30.432 "ana_reporting": false 00:15:30.432 } 00:15:30.432 }, 00:15:30.432 { 00:15:30.432 "method": "nvmf_subsystem_add_host", 
00:15:30.432 "params": { 00:15:30.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.432 "host": "nqn.2016-06.io.spdk:host1", 00:15:30.432 "psk": "key0" 00:15:30.432 } 00:15:30.432 }, 00:15:30.432 { 00:15:30.432 "method": "nvmf_subsystem_add_ns", 00:15:30.432 "params": { 00:15:30.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.432 "namespace": { 00:15:30.432 "nsid": 1, 00:15:30.432 "bdev_name": "malloc0", 00:15:30.432 "nguid": "C5655EE4BD99481E9202E30CCA4E6DF5", 00:15:30.432 "uuid": "c5655ee4-bd99-481e-9202-e30cca4e6df5", 00:15:30.432 "no_auto_visible": false 00:15:30.432 } 00:15:30.432 } 00:15:30.432 }, 00:15:30.432 { 00:15:30.432 "method": "nvmf_subsystem_add_listener", 00:15:30.432 "params": { 00:15:30.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.432 "listen_address": { 00:15:30.432 "trtype": "TCP", 00:15:30.432 "adrfam": "IPv4", 00:15:30.432 "traddr": "10.0.0.3", 00:15:30.432 "trsvcid": "4420" 00:15:30.432 }, 00:15:30.432 "secure_channel": false, 00:15:30.432 "sock_impl": "ssl" 00:15:30.432 } 00:15:30.432 } 00:15:30.432 ] 00:15:30.432 } 00:15:30.432 ] 00:15:30.432 }' 00:15:30.432 16:10:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:30.692 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:30.692 "subsystems": [ 00:15:30.692 { 00:15:30.692 "subsystem": "keyring", 00:15:30.692 "config": [ 00:15:30.692 { 00:15:30.692 "method": "keyring_file_add_key", 00:15:30.692 "params": { 00:15:30.692 "name": "key0", 00:15:30.692 "path": "/tmp/tmp.tTehi7oCd0" 00:15:30.692 } 00:15:30.692 } 00:15:30.692 ] 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "subsystem": "iobuf", 00:15:30.692 "config": [ 00:15:30.692 { 00:15:30.692 "method": "iobuf_set_options", 00:15:30.692 "params": { 00:15:30.692 "small_pool_count": 8192, 00:15:30.692 "large_pool_count": 1024, 00:15:30.692 "small_bufsize": 8192, 00:15:30.692 "large_bufsize": 135168, 00:15:30.692 "enable_numa": false 00:15:30.692 } 00:15:30.692 } 00:15:30.692 ] 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "subsystem": "sock", 00:15:30.692 "config": [ 00:15:30.692 { 00:15:30.692 "method": "sock_set_default_impl", 00:15:30.692 "params": { 00:15:30.692 "impl_name": "uring" 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "sock_impl_set_options", 00:15:30.692 "params": { 00:15:30.692 "impl_name": "ssl", 00:15:30.692 "recv_buf_size": 4096, 00:15:30.692 "send_buf_size": 4096, 00:15:30.692 "enable_recv_pipe": true, 00:15:30.692 "enable_quickack": false, 00:15:30.692 "enable_placement_id": 0, 00:15:30.692 "enable_zerocopy_send_server": true, 00:15:30.692 "enable_zerocopy_send_client": false, 00:15:30.692 "zerocopy_threshold": 0, 00:15:30.692 "tls_version": 0, 00:15:30.692 "enable_ktls": false 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "sock_impl_set_options", 00:15:30.692 "params": { 00:15:30.692 "impl_name": "posix", 00:15:30.692 "recv_buf_size": 2097152, 00:15:30.692 "send_buf_size": 2097152, 00:15:30.692 "enable_recv_pipe": true, 00:15:30.692 "enable_quickack": false, 00:15:30.692 "enable_placement_id": 0, 00:15:30.692 "enable_zerocopy_send_server": true, 00:15:30.692 "enable_zerocopy_send_client": false, 00:15:30.692 "zerocopy_threshold": 0, 00:15:30.692 "tls_version": 0, 00:15:30.692 "enable_ktls": false 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "sock_impl_set_options", 00:15:30.692 "params": { 00:15:30.692 "impl_name": "uring", 00:15:30.692 
"recv_buf_size": 2097152, 00:15:30.692 "send_buf_size": 2097152, 00:15:30.692 "enable_recv_pipe": true, 00:15:30.692 "enable_quickack": false, 00:15:30.692 "enable_placement_id": 0, 00:15:30.692 "enable_zerocopy_send_server": false, 00:15:30.692 "enable_zerocopy_send_client": false, 00:15:30.692 "zerocopy_threshold": 0, 00:15:30.692 "tls_version": 0, 00:15:30.692 "enable_ktls": false 00:15:30.692 } 00:15:30.692 } 00:15:30.692 ] 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "subsystem": "vmd", 00:15:30.692 "config": [] 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "subsystem": "accel", 00:15:30.692 "config": [ 00:15:30.692 { 00:15:30.692 "method": "accel_set_options", 00:15:30.692 "params": { 00:15:30.692 "small_cache_size": 128, 00:15:30.692 "large_cache_size": 16, 00:15:30.692 "task_count": 2048, 00:15:30.692 "sequence_count": 2048, 00:15:30.692 "buf_count": 2048 00:15:30.692 } 00:15:30.692 } 00:15:30.692 ] 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "subsystem": "bdev", 00:15:30.692 "config": [ 00:15:30.692 { 00:15:30.692 "method": "bdev_set_options", 00:15:30.692 "params": { 00:15:30.692 "bdev_io_pool_size": 65535, 00:15:30.692 "bdev_io_cache_size": 256, 00:15:30.692 "bdev_auto_examine": true, 00:15:30.692 "iobuf_small_cache_size": 128, 00:15:30.692 "iobuf_large_cache_size": 16 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "bdev_raid_set_options", 00:15:30.692 "params": { 00:15:30.692 "process_window_size_kb": 1024, 00:15:30.692 "process_max_bandwidth_mb_sec": 0 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "bdev_iscsi_set_options", 00:15:30.692 "params": { 00:15:30.692 "timeout_sec": 30 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "bdev_nvme_set_options", 00:15:30.692 "params": { 00:15:30.692 "action_on_timeout": "none", 00:15:30.692 "timeout_us": 0, 00:15:30.692 "timeout_admin_us": 0, 00:15:30.692 "keep_alive_timeout_ms": 10000, 00:15:30.692 "arbitration_burst": 0, 00:15:30.692 "low_priority_weight": 0, 00:15:30.692 "medium_priority_weight": 0, 00:15:30.692 "high_priority_weight": 0, 00:15:30.692 "nvme_adminq_poll_period_us": 10000, 00:15:30.692 "nvme_ioq_poll_period_us": 0, 00:15:30.692 "io_queue_requests": 512, 00:15:30.692 "delay_cmd_submit": true, 00:15:30.692 "transport_retry_count": 4, 00:15:30.692 "bdev_retry_count": 3, 00:15:30.692 "transport_ack_timeout": 0, 00:15:30.692 "ctrlr_loss_timeout_sec": 0, 00:15:30.692 "reconnect_delay_sec": 0, 00:15:30.692 "fast_io_fail_timeout_sec": 0, 00:15:30.692 "disable_auto_failback": false, 00:15:30.692 "generate_uuids": false, 00:15:30.692 "transport_tos": 0, 00:15:30.692 "nvme_error_stat": false, 00:15:30.692 "rdma_srq_size": 0, 00:15:30.692 "io_path_stat": false, 00:15:30.692 "allow_accel_sequence": false, 00:15:30.692 "rdma_max_cq_size": 0, 00:15:30.692 "rdma_cm_event_timeout_ms": 0, 00:15:30.692 "dhchap_digests": [ 00:15:30.692 "sha256", 00:15:30.692 "sha384", 00:15:30.692 "sha512" 00:15:30.692 ], 00:15:30.692 "dhchap_dhgroups": [ 00:15:30.692 "null", 00:15:30.692 "ffdhe2048", 00:15:30.692 "ffdhe3072", 00:15:30.692 "ffdhe4096", 00:15:30.692 "ffdhe6144", 00:15:30.692 "ffdhe8192" 00:15:30.692 ] 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "bdev_nvme_attach_controller", 00:15:30.692 "params": { 00:15:30.692 "name": "nvme0", 00:15:30.692 "trtype": "TCP", 00:15:30.692 "adrfam": "IPv4", 00:15:30.692 "traddr": "10.0.0.3", 00:15:30.692 "trsvcid": "4420", 00:15:30.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.692 "prchk_reftag": false, 00:15:30.692 
"prchk_guard": false, 00:15:30.692 "ctrlr_loss_timeout_sec": 0, 00:15:30.692 "reconnect_delay_sec": 0, 00:15:30.692 "fast_io_fail_timeout_sec": 0, 00:15:30.692 "psk": "key0", 00:15:30.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:30.692 "hdgst": false, 00:15:30.692 "ddgst": false, 00:15:30.692 "multipath": "multipath" 00:15:30.692 } 00:15:30.692 }, 00:15:30.692 { 00:15:30.692 "method": "bdev_nvme_set_hotplug", 00:15:30.693 "params": { 00:15:30.693 "period_us": 100000, 00:15:30.693 "enable": false 00:15:30.693 } 00:15:30.693 }, 00:15:30.693 { 00:15:30.693 "method": "bdev_enable_histogram", 00:15:30.693 "params": { 00:15:30.693 "name": "nvme0n1", 00:15:30.693 "enable": true 00:15:30.693 } 00:15:30.693 }, 00:15:30.693 { 00:15:30.693 "method": "bdev_wait_for_examine" 00:15:30.693 } 00:15:30.693 ] 00:15:30.693 }, 00:15:30.693 { 00:15:30.693 "subsystem": "nbd", 00:15:30.693 "config": [] 00:15:30.693 } 00:15:30.693 ] 00:15:30.693 }' 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 85158 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85158 ']' 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85158 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85158 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:30.693 killing process with pid 85158 00:15:30.693 Received shutdown signal, test time was about 1.000000 seconds 00:15:30.693 00:15:30.693 Latency(us) 00:15:30.693 [2024-11-19T16:10:37.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.693 [2024-11-19T16:10:37.408Z] =================================================================================================================== 00:15:30.693 [2024-11-19T16:10:37.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85158' 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85158 00:15:30.693 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85158 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 85132 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85132 ']' 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85132 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85132 00:15:30.953 killing process with pid 85132 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85132' 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85132 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85132 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:30.953 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:30.953 "subsystems": [ 00:15:30.953 { 00:15:30.953 "subsystem": "keyring", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "keyring_file_add_key", 00:15:30.953 "params": { 00:15:30.953 "name": "key0", 00:15:30.953 "path": "/tmp/tmp.tTehi7oCd0" 00:15:30.953 } 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "iobuf", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "iobuf_set_options", 00:15:30.953 "params": { 00:15:30.953 "small_pool_count": 8192, 00:15:30.953 "large_pool_count": 1024, 00:15:30.953 "small_bufsize": 8192, 00:15:30.953 "large_bufsize": 135168, 00:15:30.953 "enable_numa": false 00:15:30.953 } 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "sock", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "sock_set_default_impl", 00:15:30.953 "params": { 00:15:30.953 "impl_name": "uring" 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "sock_impl_set_options", 00:15:30.953 "params": { 00:15:30.953 "impl_name": "ssl", 00:15:30.953 "recv_buf_size": 4096, 00:15:30.953 "send_buf_size": 4096, 00:15:30.953 "enable_recv_pipe": true, 00:15:30.953 "enable_quickack": false, 00:15:30.953 "enable_placement_id": 0, 00:15:30.953 "enable_zerocopy_send_server": true, 00:15:30.953 "enable_zerocopy_send_client": false, 00:15:30.953 "zerocopy_threshold": 0, 00:15:30.953 "tls_version": 0, 00:15:30.953 "enable_ktls": false 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "sock_impl_set_options", 00:15:30.953 "params": { 00:15:30.953 "impl_name": "posix", 00:15:30.953 "recv_buf_size": 2097152, 00:15:30.953 "send_buf_size": 2097152, 00:15:30.953 "enable_recv_pipe": true, 00:15:30.953 "enable_quickack": false, 00:15:30.953 "enable_placement_id": 0, 00:15:30.953 "enable_zerocopy_send_server": true, 00:15:30.953 "enable_zerocopy_send_client": false, 00:15:30.953 "zerocopy_threshold": 0, 00:15:30.953 "tls_version": 0, 00:15:30.953 "enable_ktls": false 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "sock_impl_set_options", 00:15:30.953 "params": { 00:15:30.953 "impl_name": "uring", 00:15:30.953 "recv_buf_size": 2097152, 00:15:30.953 "send_buf_size": 2097152, 00:15:30.953 "enable_recv_pipe": true, 00:15:30.953 "enable_quickack": false, 00:15:30.953 "enable_placement_id": 0, 00:15:30.953 "enable_zerocopy_send_server": false, 00:15:30.953 "enable_zerocopy_send_client": false, 00:15:30.953 "zerocopy_threshold": 0, 00:15:30.953 "tls_version": 0, 00:15:30.953 "enable_ktls": false 00:15:30.953 } 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 
00:15:30.953 "subsystem": "vmd", 00:15:30.953 "config": [] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "accel", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "accel_set_options", 00:15:30.953 "params": { 00:15:30.953 "small_cache_size": 128, 00:15:30.953 "large_cache_size": 16, 00:15:30.953 "task_count": 2048, 00:15:30.953 "sequence_count": 2048, 00:15:30.953 "buf_count": 2048 00:15:30.953 } 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "bdev", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "bdev_set_options", 00:15:30.953 "params": { 00:15:30.953 "bdev_io_pool_size": 65535, 00:15:30.953 "bdev_io_cache_size": 256, 00:15:30.953 "bdev_auto_examine": true, 00:15:30.953 "iobuf_small_cache_size": 128, 00:15:30.953 "iobuf_large_cache_size": 16 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_raid_set_options", 00:15:30.953 "params": { 00:15:30.953 "process_window_size_kb": 1024, 00:15:30.953 "process_max_bandwidth_mb_sec": 0 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_iscsi_set_options", 00:15:30.953 "params": { 00:15:30.953 "timeout_sec": 30 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_nvme_set_options", 00:15:30.953 "params": { 00:15:30.953 "action_on_timeout": "none", 00:15:30.953 "timeout_us": 0, 00:15:30.953 "timeout_admin_us": 0, 00:15:30.953 "keep_alive_timeout_ms": 10000, 00:15:30.953 "arbitration_burst": 0, 00:15:30.953 "low_priority_weight": 0, 00:15:30.953 "medium_priority_weight": 0, 00:15:30.953 "high_priority_weight": 0, 00:15:30.953 "nvme_adminq_poll_period_us": 10000, 00:15:30.953 "nvme_ioq_poll_period_us": 0, 00:15:30.953 "io_queue_requests": 0, 00:15:30.953 "delay_cmd_submit": true, 00:15:30.953 "transport_retry_count": 4, 00:15:30.953 "bdev_retry_count": 3, 00:15:30.953 "transport_ack_timeout": 0, 00:15:30.953 "ctrlr_loss_timeout_sec": 0, 00:15:30.953 "reconnect_delay_sec": 0, 00:15:30.953 "fast_io_fail_timeout_sec": 0, 00:15:30.953 "disable_auto_failback": false, 00:15:30.953 "generate_uuids": false, 00:15:30.953 "transport_tos": 0, 00:15:30.953 "nvme_error_stat": false, 00:15:30.953 "rdma_srq_size": 0, 00:15:30.953 "io_path_stat": false, 00:15:30.953 "allow_accel_sequence": false, 00:15:30.953 "rdma_max_cq_size": 0, 00:15:30.953 "rdma_cm_event_timeout_ms": 0, 00:15:30.953 "dhchap_digests": [ 00:15:30.953 "sha256", 00:15:30.953 "sha384", 00:15:30.953 "sha512" 00:15:30.953 ], 00:15:30.953 "dhchap_dhgroups": [ 00:15:30.953 "null", 00:15:30.953 "ffdhe2048", 00:15:30.953 "ffdhe3072", 00:15:30.953 "ffdhe4096", 00:15:30.953 "ffdhe6144", 00:15:30.953 "ffdhe8192" 00:15:30.953 ] 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_nvme_set_hotplug", 00:15:30.953 "params": { 00:15:30.953 "period_us": 100000, 00:15:30.953 "enable": false 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_malloc_create", 00:15:30.953 "params": { 00:15:30.953 "name": "malloc0", 00:15:30.953 "num_blocks": 8192, 00:15:30.953 "block_size": 4096, 00:15:30.953 "physical_block_size": 4096, 00:15:30.953 "uuid": "c5655ee4-bd99-481e-9202-e30cca4e6df5", 00:15:30.953 "optimal_io_boundary": 0, 00:15:30.953 "md_size": 0, 00:15:30.953 "dif_type": 0, 00:15:30.953 "dif_is_head_of_md": false, 00:15:30.953 "dif_pi_format": 0 00:15:30.953 } 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "method": "bdev_wait_for_examine" 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": 
"nbd", 00:15:30.953 "config": [] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "scheduler", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "framework_set_scheduler", 00:15:30.953 "params": { 00:15:30.953 "name": "static" 00:15:30.953 } 00:15:30.953 } 00:15:30.953 ] 00:15:30.953 }, 00:15:30.953 { 00:15:30.953 "subsystem": "nvmf", 00:15:30.953 "config": [ 00:15:30.953 { 00:15:30.953 "method": "nvmf_set_config", 00:15:30.953 "params": { 00:15:30.953 "discovery_filter": "match_any", 00:15:30.953 "admin_cmd_passthru": { 00:15:30.953 "identify_ctrlr": false 00:15:30.953 }, 00:15:30.953 "dhchap_digests": [ 00:15:30.953 "sha256", 00:15:30.953 "sha384", 00:15:30.953 "sha512" 00:15:30.953 ], 00:15:30.953 "dhchap_dhgroups": [ 00:15:30.953 "null", 00:15:30.953 "ffdhe2048", 00:15:30.954 "ffdhe3072", 00:15:30.954 "ffdhe4096", 00:15:30.954 "ffdhe6144", 00:15:30.954 "ffdhe8192" 00:15:30.954 ] 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_set_max_subsystems", 00:15:30.954 "params": { 00:15:30.954 "max_subsystems": 1024 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_set_crdt", 00:15:30.954 "params": { 00:15:30.954 "crdt1": 0, 00:15:30.954 "crdt2": 0, 00:15:30.954 "crdt3": 0 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_create_transport", 00:15:30.954 "params": { 00:15:30.954 "trtype": "TCP", 00:15:30.954 "max_queue_depth": 128, 00:15:30.954 "max_io_qpairs_per_ctrlr": 127, 00:15:30.954 "in_capsule_data_size": 4096, 00:15:30.954 "max_io_size": 131072, 00:15:30.954 "io_unit_size": 131072, 00:15:30.954 "max_aq_depth": 128, 00:15:30.954 "num_shared_buffers": 511, 00:15:30.954 "buf_cache_size": 4294967295, 00:15:30.954 "dif_insert_or_strip": false, 00:15:30.954 "zcopy": false, 00:15:30.954 "c2h_success": false, 00:15:30.954 "sock_priority": 0, 00:15:30.954 "abort_timeout_sec": 1, 00:15:30.954 "ack_timeout": 0, 00:15:30.954 "data_wr_pool_size": 0 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_create_subsystem", 00:15:30.954 "params": { 00:15:30.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.954 "allow_any_host": false, 00:15:30.954 "serial_number": "00000000000000000000", 00:15:30.954 "model_number": "SPDK bdev Controller", 00:15:30.954 "max_namespaces": 32, 00:15:30.954 "min_cntlid": 1, 00:15:30.954 "max_cntlid": 65519, 00:15:30.954 "ana_reporting": false 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_subsystem_add_host", 00:15:30.954 "params": { 00:15:30.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.954 "host": "nqn.2016-06.io.spdk:host1", 00:15:30.954 "psk": "key0" 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_subsystem_add_ns", 00:15:30.954 "params": { 00:15:30.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.954 "namespace": { 00:15:30.954 "nsid": 1, 00:15:30.954 "bdev_name": "malloc0", 00:15:30.954 "nguid": "C5655EE4BD99481E9202E30CCA4E6DF5", 00:15:30.954 "uuid": "c5655ee4-bd99-481e-9202-e30cca4e6df5", 00:15:30.954 "no_auto_visible": false 00:15:30.954 } 00:15:30.954 } 00:15:30.954 }, 00:15:30.954 { 00:15:30.954 "method": "nvmf_subsystem_add_listener", 00:15:30.954 "params": { 00:15:30.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:30.954 "listen_address": { 00:15:30.954 "trtype": "TCP", 00:15:30.954 "adrfam": "IPv4", 00:15:30.954 "traddr": "10.0.0.3", 00:15:30.954 "trsvcid": "4420" 00:15:30.954 }, 00:15:30.954 "secure_channel": false, 00:15:30.954 "sock_impl": "ssl" 00:15:30.954 } 00:15:30.954 } 
00:15:30.954 ] 00:15:30.954 } 00:15:30.954 ] 00:15:30.954 }' 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=85201 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 85201 00:15:30.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85201 ']' 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:30.954 16:10:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:31.213 [2024-11-19 16:10:37.699168] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:31.213 [2024-11-19 16:10:37.699326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.213 [2024-11-19 16:10:37.855499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.213 [2024-11-19 16:10:37.877815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.213 [2024-11-19 16:10:37.878131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.213 [2024-11-19 16:10:37.878166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.213 [2024-11-19 16:10:37.878176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.213 [2024-11-19 16:10:37.878186] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
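From here the test switches to config-file startup: the JSON captured earlier via save_config is echoed back into nvmf_tgt through -c /dev/fd/62, so this second target instance comes up with the keyring, TCP transport, subsystem and TLS listener already in place and no further setup RPCs are needed. Done by hand, the same flow would look like this (the file name is illustrative only):

  rpc.py save_config > tgt_config.json                                          # dump the live target configuration
  ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -c tgt_config.json     # restart the target from that config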
00:15:31.213 [2024-11-19 16:10:37.878616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.472 [2024-11-19 16:10:38.025740] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:31.472 [2024-11-19 16:10:38.084581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:31.472 [2024-11-19 16:10:38.116519] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:31.472 [2024-11-19 16:10:38.116768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=85242 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 85242 /var/tmp/bdevperf.sock 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 85242 ']' 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.409 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:32.409 "subsystems": [ 00:15:32.409 { 00:15:32.409 "subsystem": "keyring", 00:15:32.409 "config": [ 00:15:32.409 { 00:15:32.409 "method": "keyring_file_add_key", 00:15:32.409 "params": { 00:15:32.409 "name": "key0", 00:15:32.409 "path": "/tmp/tmp.tTehi7oCd0" 00:15:32.409 } 00:15:32.409 } 00:15:32.409 ] 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "subsystem": "iobuf", 00:15:32.409 "config": [ 00:15:32.409 { 00:15:32.409 "method": "iobuf_set_options", 00:15:32.409 "params": { 00:15:32.409 "small_pool_count": 8192, 00:15:32.409 "large_pool_count": 1024, 00:15:32.409 "small_bufsize": 8192, 00:15:32.409 "large_bufsize": 135168, 00:15:32.409 "enable_numa": false 00:15:32.409 } 00:15:32.409 } 00:15:32.409 ] 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "subsystem": "sock", 00:15:32.409 "config": [ 00:15:32.409 { 00:15:32.409 "method": "sock_set_default_impl", 00:15:32.409 "params": { 00:15:32.409 "impl_name": "uring" 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "sock_impl_set_options", 00:15:32.409 "params": { 00:15:32.409 "impl_name": "ssl", 00:15:32.409 "recv_buf_size": 4096, 00:15:32.409 "send_buf_size": 4096, 00:15:32.409 "enable_recv_pipe": true, 00:15:32.409 "enable_quickack": false, 00:15:32.409 "enable_placement_id": 0, 00:15:32.409 "enable_zerocopy_send_server": true, 00:15:32.409 "enable_zerocopy_send_client": false, 00:15:32.409 "zerocopy_threshold": 0, 00:15:32.409 "tls_version": 0, 00:15:32.409 
"enable_ktls": false 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "sock_impl_set_options", 00:15:32.409 "params": { 00:15:32.409 "impl_name": "posix", 00:15:32.409 "recv_buf_size": 2097152, 00:15:32.409 "send_buf_size": 2097152, 00:15:32.409 "enable_recv_pipe": true, 00:15:32.409 "enable_quickack": false, 00:15:32.409 "enable_placement_id": 0, 00:15:32.409 "enable_zerocopy_send_server": true, 00:15:32.409 "enable_zerocopy_send_client": false, 00:15:32.409 "zerocopy_threshold": 0, 00:15:32.409 "tls_version": 0, 00:15:32.409 "enable_ktls": false 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "sock_impl_set_options", 00:15:32.409 "params": { 00:15:32.409 "impl_name": "uring", 00:15:32.409 "recv_buf_size": 2097152, 00:15:32.409 "send_buf_size": 2097152, 00:15:32.409 "enable_recv_pipe": true, 00:15:32.409 "enable_quickack": false, 00:15:32.409 "enable_placement_id": 0, 00:15:32.409 "enable_zerocopy_send_server": false, 00:15:32.409 "enable_zerocopy_send_client": false, 00:15:32.409 "zerocopy_threshold": 0, 00:15:32.409 "tls_version": 0, 00:15:32.409 "enable_ktls": false 00:15:32.409 } 00:15:32.409 } 00:15:32.409 ] 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "subsystem": "vmd", 00:15:32.409 "config": [] 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "subsystem": "accel", 00:15:32.409 "config": [ 00:15:32.409 { 00:15:32.409 "method": "accel_set_options", 00:15:32.409 "params": { 00:15:32.409 "small_cache_size": 128, 00:15:32.409 "large_cache_size": 16, 00:15:32.409 "task_count": 2048, 00:15:32.409 "sequence_count": 2048, 00:15:32.409 "buf_count": 2048 00:15:32.409 } 00:15:32.409 } 00:15:32.409 ] 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "subsystem": "bdev", 00:15:32.409 "config": [ 00:15:32.409 { 00:15:32.409 "method": "bdev_set_options", 00:15:32.409 "params": { 00:15:32.409 "bdev_io_pool_size": 65535, 00:15:32.409 "bdev_io_cache_size": 256, 00:15:32.409 "bdev_auto_examine": true, 00:15:32.409 "iobuf_small_cache_size": 128, 00:15:32.409 "iobuf_large_cache_size": 16 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "bdev_raid_set_options", 00:15:32.409 "params": { 00:15:32.409 "process_window_size_kb": 1024, 00:15:32.409 "process_max_bandwidth_mb_sec": 0 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "bdev_iscsi_set_options", 00:15:32.409 "params": { 00:15:32.409 "timeout_sec": 30 00:15:32.409 } 00:15:32.409 }, 00:15:32.409 { 00:15:32.409 "method": "bdev_nvme_set_options", 00:15:32.409 "params": { 00:15:32.409 "action_on_timeout": "none", 00:15:32.409 "timeout_us": 0, 00:15:32.409 "timeout_admin_us": 0, 00:15:32.409 "keep_alive_timeout_ms": 10000, 00:15:32.409 "arbitration_burst": 0, 00:15:32.409 "low_priority_weight": 0, 00:15:32.410 "medium_priority_weight": 0, 00:15:32.410 "high_priority_weight": 0, 00:15:32.410 "nvme_adminq_poll_period_us": 10000, 00:15:32.410 "nvme_ioq_poll_period_us": 0, 00:15:32.410 "io_queue_requests": 512, 00:15:32.410 "delay_cmd_submit": true, 00:15:32.410 "transport_retry_count": 4, 00:15:32.410 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:32.410 "bdev_retry_count": 3, 00:15:32.410 "transport_ack_timeout": 0, 00:15:32.410 "ctrlr_loss_timeout_sec": 0, 00:15:32.410 "reconnect_delay_sec": 0, 00:15:32.410 "fast_io_fail_timeout_sec": 0, 00:15:32.410 "disable_auto_failback": false, 00:15:32.410 "generate_uuids": false, 00:15:32.410 
"transport_tos": 0, 00:15:32.410 "nvme_error_stat": false, 00:15:32.410 "rdma_srq_size": 0, 00:15:32.410 "io_path_stat": false, 00:15:32.410 "allow_accel_sequence": false, 00:15:32.410 "rdma_max_cq_size": 0, 00:15:32.410 "rdma_cm_event_timeout_ms": 0, 00:15:32.410 "dhchap_digests": [ 00:15:32.410 "sha256", 00:15:32.410 "sha384", 00:15:32.410 "sha512" 00:15:32.410 ], 00:15:32.410 "dhchap_dhgroups": [ 00:15:32.410 "null", 00:15:32.410 "ffdhe2048", 00:15:32.410 "ffdhe3072", 00:15:32.410 "ffdhe4096", 00:15:32.410 "ffdhe6144", 00:15:32.410 "ffdhe8192" 00:15:32.410 ] 00:15:32.410 } 00:15:32.410 }, 00:15:32.410 { 00:15:32.410 "method": "bdev_nvme_attach_controller", 00:15:32.410 "params": { 00:15:32.410 "name": "nvme0", 00:15:32.410 "trtype": "TCP", 00:15:32.410 "adrfam": "IPv4", 00:15:32.410 "traddr": "10.0.0.3", 00:15:32.410 "trsvcid": "4420", 00:15:32.410 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:32.410 "prchk_reftag": false, 00:15:32.410 "prchk_guard": false, 00:15:32.410 "ctrlr_loss_timeout_sec": 0, 00:15:32.410 "reconnect_delay_sec": 0, 00:15:32.410 "fast_io_fail_timeout_sec": 0, 00:15:32.410 "psk": "key0", 00:15:32.410 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:32.410 "hdgst": false, 00:15:32.410 "ddgst": false, 00:15:32.410 "multipath": "multipath" 00:15:32.410 } 00:15:32.410 }, 00:15:32.410 { 00:15:32.410 "method": "bdev_nvme_set_hotplug", 00:15:32.410 "params": { 00:15:32.410 "period_us": 100000, 00:15:32.410 "enable": false 00:15:32.410 } 00:15:32.410 }, 00:15:32.410 { 00:15:32.410 "method": "bdev_enable_histogram", 00:15:32.410 "params": { 00:15:32.410 "name": "nvme0n1", 00:15:32.410 "enable": true 00:15:32.410 } 00:15:32.410 }, 00:15:32.410 { 00:15:32.410 "method": "bdev_wait_for_examine" 00:15:32.410 } 00:15:32.410 ] 00:15:32.410 }, 00:15:32.410 { 00:15:32.410 "subsystem": "nbd", 00:15:32.410 "config": [] 00:15:32.410 } 00:15:32.410 ] 00:15:32.410 }' 00:15:32.410 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.410 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.410 16:10:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.410 [2024-11-19 16:10:38.868317] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:15:32.410 [2024-11-19 16:10:38.868614] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85242 ] 00:15:32.410 [2024-11-19 16:10:39.022981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.410 [2024-11-19 16:10:39.046594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.667 [2024-11-19 16:10:39.161513] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.667 [2024-11-19 16:10:39.190539] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.604 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.604 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:15:33.604 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:33.604 16:10:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:33.604 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.604 16:10:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:33.881 Running I/O for 1 seconds... 00:15:34.872 3968.00 IOPS, 15.50 MiB/s 00:15:34.872 Latency(us) 00:15:34.872 [2024-11-19T16:10:41.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.872 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:34.872 Verification LBA range: start 0x0 length 0x2000 00:15:34.872 nvme0n1 : 1.03 3994.30 15.60 0.00 0.00 31636.30 6285.50 21924.77 00:15:34.872 [2024-11-19T16:10:41.587Z] =================================================================================================================== 00:15:34.872 [2024-11-19T16:10:41.587Z] Total : 3994.30 15.60 0.00 0.00 31636.30 6285.50 21924.77 00:15:34.872 { 00:15:34.872 "results": [ 00:15:34.872 { 00:15:34.872 "job": "nvme0n1", 00:15:34.872 "core_mask": "0x2", 00:15:34.872 "workload": "verify", 00:15:34.872 "status": "finished", 00:15:34.872 "verify_range": { 00:15:34.872 "start": 0, 00:15:34.872 "length": 8192 00:15:34.872 }, 00:15:34.872 "queue_depth": 128, 00:15:34.872 "io_size": 4096, 00:15:34.872 "runtime": 1.02546, 00:15:34.872 "iops": 3994.304994831588, 00:15:34.872 "mibps": 15.60275388606089, 00:15:34.872 "io_failed": 0, 00:15:34.872 "io_timeout": 0, 00:15:34.872 "avg_latency_us": 31636.298181818183, 00:15:34.872 "min_latency_us": 6285.498181818181, 00:15:34.873 "max_latency_us": 21924.77090909091 00:15:34.873 } 00:15:34.873 ], 00:15:34.873 "core_count": 1 00:15:34.873 } 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
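The per-job statistics block printed above is plain JSON, so it can be post-processed with the same jq the harness already uses. A small sketch, assuming the object has been captured to a file named results.json (a hypothetical name; the test itself only echoes the blob to the log):

# Sketch: pull the headline numbers out of a captured bdevperf results blob.
# results.json is a hypothetical capture of the JSON object shown above.
jq -r '.results[] | "\(.job): \(.iops) IOPS, avg latency \(.avg_latency_us) us"' results.json
# For the run above this would print:
#   nvme0n1: 3994.304994831588 IOPS, avg latency 31636.298181818183 us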
00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:34.873 nvmf_trace.0 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 85242 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85242 ']' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85242 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85242 00:15:34.873 killing process with pid 85242 00:15:34.873 Received shutdown signal, test time was about 1.000000 seconds 00:15:34.873 00:15:34.873 Latency(us) 00:15:34.873 [2024-11-19T16:10:41.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.873 [2024-11-19T16:10:41.588Z] =================================================================================================================== 00:15:34.873 [2024-11-19T16:10:41.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85242' 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85242 00:15:34.873 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85242 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.131 rmmod nvme_tcp 00:15:35.131 rmmod nvme_fabrics 00:15:35.131 rmmod nvme_keyring 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 85201 ']' 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 85201 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 85201 ']' 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 85201 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:35.131 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85201 00:15:35.390 killing process with pid 85201 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85201' 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 85201 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 85201 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.390 16:10:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:35.390 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:35.390 16:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.fjQdx5dqyP /tmp/tmp.6uGShkfUU2 /tmp/tmp.tTehi7oCd0 00:15:35.648 00:15:35.648 real 1m20.370s 00:15:35.648 user 2m10.536s 00:15:35.648 sys 0m26.524s 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:35.648 ************************************ 00:15:35.648 END TEST nvmf_tls 00:15:35.648 ************************************ 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:35.648 ************************************ 00:15:35.648 START TEST nvmf_fips 00:15:35.648 ************************************ 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:35.648 * Looking for test storage... 
00:15:35.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:15:35.648 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.908 --rc genhtml_branch_coverage=1 00:15:35.908 --rc genhtml_function_coverage=1 00:15:35.908 --rc genhtml_legend=1 00:15:35.908 --rc geninfo_all_blocks=1 00:15:35.908 --rc geninfo_unexecuted_blocks=1 00:15:35.908 00:15:35.908 ' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.908 --rc genhtml_branch_coverage=1 00:15:35.908 --rc genhtml_function_coverage=1 00:15:35.908 --rc genhtml_legend=1 00:15:35.908 --rc geninfo_all_blocks=1 00:15:35.908 --rc geninfo_unexecuted_blocks=1 00:15:35.908 00:15:35.908 ' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.908 --rc genhtml_branch_coverage=1 00:15:35.908 --rc genhtml_function_coverage=1 00:15:35.908 --rc genhtml_legend=1 00:15:35.908 --rc geninfo_all_blocks=1 00:15:35.908 --rc geninfo_unexecuted_blocks=1 00:15:35.908 00:15:35.908 ' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:35.908 --rc genhtml_branch_coverage=1 00:15:35.908 --rc genhtml_function_coverage=1 00:15:35.908 --rc genhtml_legend=1 00:15:35.908 --rc geninfo_all_blocks=1 00:15:35.908 --rc geninfo_unexecuted_blocks=1 00:15:35.908 00:15:35.908 ' 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
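The long run of scripts/common.sh trace above is just a dotted-version comparison driving the lcov gate (lt 1.15 2); the same helper is reused further down for the OpenSSL 3.x check. A condensed sketch of the idea (not the actual cmp_versions implementation) in bash:

# Sketch of a field-by-field dotted-version comparison like the one traced above.
version_ge() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # strictly newer in this field
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # strictly older in this field
    done
    return 0                                        # all fields equal
}
version_ge 3.1.1 3.0.0 && echo "OpenSSL is new enough for the FIPS test"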
00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:35.908 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:35.909 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:15:35.909 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:15:35.910 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:15:36.169 Error setting digest 00:15:36.169 4062371F217F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:36.169 4062371F217F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:36.169 
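What the fips.sh preamble above establishes is that this Red Hat OpenSSL build ships a FIPS provider and that, with the generated spdk_fips.conf in force, non-approved digests are refused; the "Error setting digest" output for MD5 is therefore the expected result, not a failure. A standalone sketch of the same sanity check, assuming spdk_fips.conf has already been written as in the trace:

# Sketch of the FIPS sanity check: the fips provider must be listed and a
# non-approved digest such as MD5 must be refused under the generated config.
export OPENSSL_CONF=spdk_fips.conf
openssl list -providers | grep name          # expect base + fips provider entries
if echo -n probe | openssl md5 >/dev/null 2>&1; then
    echo "MD5 unexpectedly worked - FIPS enforcement is NOT active" >&2
else
    echo "MD5 rejected as expected - FIPS provider is enforcing"
fi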
16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:36.169 Cannot find device "nvmf_init_br" 00:15:36.169 16:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:36.169 Cannot find device "nvmf_init_br2" 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:36.169 Cannot find device "nvmf_tgt_br" 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.169 Cannot find device "nvmf_tgt_br2" 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:36.169 Cannot find device "nvmf_init_br" 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:36.169 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:36.170 Cannot find device "nvmf_init_br2" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:36.170 Cannot find device "nvmf_tgt_br" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:36.170 Cannot find device "nvmf_tgt_br2" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:36.170 Cannot find device "nvmf_br" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:36.170 Cannot find device "nvmf_init_if" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:36.170 Cannot find device "nvmf_init_if2" 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.170 16:10:42 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:36.170 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.429 16:10:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:36.429 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.429 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:15:36.429 00:15:36.429 --- 10.0.0.3 ping statistics --- 00:15:36.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.429 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:36.429 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:36.429 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:15:36.429 00:15:36.429 --- 10.0.0.4 ping statistics --- 00:15:36.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.429 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:15:36.429 00:15:36.429 --- 10.0.0.1 ping statistics --- 00:15:36.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.429 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:36.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:36.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:36.429 00:15:36.429 --- 10.0.0.2 ping statistics --- 00:15:36.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.429 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:36.429 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=85556 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 85556 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85556 ']' 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.430 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.689 [2024-11-19 16:10:43.158727] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
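nvmf_veth_init above builds a bridged topology: an nvmf_br bridge in the root namespace, veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, /24 addresses from 10.0.0.0, iptables ACCEPT rules for port 4420, and ping checks in both directions. A deliberately simplified sketch of the core idea, using a single veth pair instead of the full bridge (names and addresses borrowed from the trace):

# Simplified sketch of the namespace plumbing above: one veth pair instead of
# the bridge plus four interfaces the real nvmf_veth_init creates.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_tgt_if
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.3        # initiator -> target, as in the checks above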
00:15:36.689 [2024-11-19 16:10:43.159459] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.689 [2024-11-19 16:10:43.322014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.689 [2024-11-19 16:10:43.345853] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.689 [2024-11-19 16:10:43.346187] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.689 [2024-11-19 16:10:43.346214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.689 [2024-11-19 16:10:43.346225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.689 [2024-11-19 16:10:43.346234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.689 [2024-11-19 16:10:43.346625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.689 [2024-11-19 16:10:43.382332] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gc0 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gc0 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gc0 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gc0 00:15:36.949 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:37.208 [2024-11-19 16:10:43.781682] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.208 [2024-11-19 16:10:43.797633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:37.208 [2024-11-19 16:10:43.797878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:37.208 malloc0 00:15:37.208 16:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=85590 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 85590 /var/tmp/bdevperf.sock 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 85590 ']' 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:37.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:37.208 16:10:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:37.467 [2024-11-19 16:10:43.949821] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:37.467 [2024-11-19 16:10:43.950189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85590 ] 00:15:37.467 [2024-11-19 16:10:44.105566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.467 [2024-11-19 16:10:44.129428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.467 [2024-11-19 16:10:44.162260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.725 16:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.725 16:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:15:37.725 16:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gc0 00:15:37.983 16:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:38.242 [2024-11-19 16:10:44.728991] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.242 TLSTESTn1 00:15:38.242 16:10:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.242 Running I/O for 10 seconds... 
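Before the 10-second run whose output follows, the test wires TLS up from the bdevperf side in two RPC calls: the PSK file is registered as keyring key key0, then the controller is attached over TCP with --psk pointing at that key. The same sequence issued by hand against the bdevperf RPC socket (paths, NQNs, and the mktemp-generated PSK file name copied from this run):

# The two client-side RPCs traced above: register the PSK, then attach with TLS.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$RPC keyring_file_add_key key0 /tmp/spdk-psk.gc0
$RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
# The attached namespace shows up as TLSTESTn1, which bdevperf then exercises.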
00:15:40.556 3968.00 IOPS, 15.50 MiB/s [2024-11-19T16:10:48.209Z] 4051.50 IOPS, 15.83 MiB/s [2024-11-19T16:10:49.146Z] 4170.67 IOPS, 16.29 MiB/s [2024-11-19T16:10:50.088Z] 4164.75 IOPS, 16.27 MiB/s [2024-11-19T16:10:51.023Z] 4132.60 IOPS, 16.14 MiB/s [2024-11-19T16:10:51.959Z] 4063.33 IOPS, 15.87 MiB/s [2024-11-19T16:10:53.336Z] 3954.57 IOPS, 15.45 MiB/s [2024-11-19T16:10:54.296Z] 3972.25 IOPS, 15.52 MiB/s [2024-11-19T16:10:55.235Z] 3984.56 IOPS, 15.56 MiB/s [2024-11-19T16:10:55.235Z] 3951.60 IOPS, 15.44 MiB/s 00:15:48.520 Latency(us) 00:15:48.520 [2024-11-19T16:10:55.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:48.520 Verification LBA range: start 0x0 length 0x2000 00:15:48.520 TLSTESTn1 : 10.02 3956.89 15.46 0.00 0.00 32288.78 6166.34 31933.91 00:15:48.520 [2024-11-19T16:10:55.235Z] =================================================================================================================== 00:15:48.520 [2024-11-19T16:10:55.235Z] Total : 3956.89 15.46 0.00 0.00 32288.78 6166.34 31933.91 00:15:48.520 { 00:15:48.520 "results": [ 00:15:48.520 { 00:15:48.520 "job": "TLSTESTn1", 00:15:48.520 "core_mask": "0x4", 00:15:48.520 "workload": "verify", 00:15:48.520 "status": "finished", 00:15:48.520 "verify_range": { 00:15:48.520 "start": 0, 00:15:48.520 "length": 8192 00:15:48.520 }, 00:15:48.520 "queue_depth": 128, 00:15:48.520 "io_size": 4096, 00:15:48.520 "runtime": 10.01847, 00:15:48.520 "iops": 3956.8916211756887, 00:15:48.520 "mibps": 15.456607895217534, 00:15:48.520 "io_failed": 0, 00:15:48.520 "io_timeout": 0, 00:15:48.520 "avg_latency_us": 32288.78240910696, 00:15:48.520 "min_latency_us": 6166.341818181818, 00:15:48.520 "max_latency_us": 31933.905454545453 00:15:48.520 } 00:15:48.520 ], 00:15:48.520 "core_count": 1 00:15:48.520 } 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:15:48.520 16:10:54 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:48.520 nvmf_trace.0 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 85590 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85590 ']' 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
85590 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85590 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:48.520 killing process with pid 85590 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85590' 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85590 00:15:48.520 Received shutdown signal, test time was about 10.000000 seconds 00:15:48.520 00:15:48.520 Latency(us) 00:15:48.520 [2024-11-19T16:10:55.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:48.520 [2024-11-19T16:10:55.235Z] =================================================================================================================== 00:15:48.520 [2024-11-19T16:10:55.235Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85590 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:48.520 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:48.779 rmmod nvme_tcp 00:15:48.779 rmmod nvme_fabrics 00:15:48.779 rmmod nvme_keyring 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 85556 ']' 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 85556 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 85556 ']' 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 85556 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85556 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' 
reactor_1 = sudo ']' 00:15:48.779 killing process with pid 85556 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85556' 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 85556 00:15:48.779 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 85556 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:49.037 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.038 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:49.297 16:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gc0 00:15:49.297 ************************************ 00:15:49.297 END TEST nvmf_fips 00:15:49.297 ************************************ 00:15:49.297 00:15:49.297 real 0m13.488s 00:15:49.297 user 0m17.832s 00:15:49.297 sys 0m5.804s 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:49.297 ************************************ 00:15:49.297 START TEST nvmf_control_msg_list 00:15:49.297 ************************************ 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:49.297 * Looking for test storage... 00:15:49.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:49.297 16:10:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.297 --rc genhtml_branch_coverage=1 00:15:49.297 --rc genhtml_function_coverage=1 00:15:49.297 --rc genhtml_legend=1 00:15:49.297 --rc geninfo_all_blocks=1 00:15:49.297 --rc geninfo_unexecuted_blocks=1 00:15:49.297 00:15:49.297 ' 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.297 --rc genhtml_branch_coverage=1 00:15:49.297 --rc genhtml_function_coverage=1 00:15:49.297 --rc genhtml_legend=1 00:15:49.297 --rc geninfo_all_blocks=1 00:15:49.297 --rc geninfo_unexecuted_blocks=1 00:15:49.297 00:15:49.297 ' 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.297 --rc genhtml_branch_coverage=1 00:15:49.297 --rc genhtml_function_coverage=1 00:15:49.297 --rc genhtml_legend=1 00:15:49.297 --rc geninfo_all_blocks=1 00:15:49.297 --rc geninfo_unexecuted_blocks=1 00:15:49.297 00:15:49.297 ' 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:49.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:49.297 --rc genhtml_branch_coverage=1 00:15:49.297 --rc genhtml_function_coverage=1 00:15:49.297 --rc genhtml_legend=1 00:15:49.297 --rc geninfo_all_blocks=1 00:15:49.297 --rc 
geninfo_unexecuted_blocks=1 00:15:49.297 00:15:49.297 ' 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.297 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.574 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:49.575 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:49.575 Cannot find device "nvmf_init_br" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:49.575 Cannot find device "nvmf_init_br2" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:49.575 Cannot find device "nvmf_tgt_br" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:49.575 Cannot find device "nvmf_tgt_br2" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:49.575 Cannot find device "nvmf_init_br" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:49.575 Cannot find device "nvmf_init_br2" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:49.575 Cannot find device "nvmf_tgt_br" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:49.575 Cannot find device "nvmf_tgt_br2" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:49.575 Cannot find device "nvmf_br" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:49.575 Cannot find 
device "nvmf_init_if" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:49.575 Cannot find device "nvmf_init_if2" 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:49.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:49.575 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:49.575 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:49.576 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:49.576 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:49.576 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:49.576 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:49.576 16:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:49.834 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:49.834 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:49.834 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:49.834 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:49.835 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:49.835 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:15:49.835 00:15:49.835 --- 10.0.0.3 ping statistics --- 00:15:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.835 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:49.835 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:49.835 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:49.835 00:15:49.835 --- 10.0.0.4 ping statistics --- 00:15:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.835 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:49.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:49.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:15:49.835 00:15:49.835 --- 10.0.0.1 ping statistics --- 00:15:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.835 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:49.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:49.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:15:49.835 00:15:49.835 --- 10.0.0.2 ping statistics --- 00:15:49.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:49.835 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=85968 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 85968 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 85968 ']' 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:49.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
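Before nvmf_tgt comes up for the control_msg_list test, nvmf_veth_init builds the virtual test network whose ip/iptables/ping output appears above. Condensed to a single initiator/target pair (the nvmf_init_if2/nvmf_tgt_if2 interfaces carrying 10.0.0.2 and 10.0.0.4 are created the same way), the topology is roughly the sketch below, reconstructed from the commands in this log rather than copied from test/nvmf/common.sh.

# Condensed sketch of the veth test topology set up above.
# Initiator interfaces stay in the default netns; the target side lives in nvmf_tgt_ns_spdk.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Both veth halves are bridged together and the NVMe/TCP port is opened;
# rules carry an SPDK_NVMF comment so the later iptables-save | grep -v SPDK_NVMF cleanup can strip them
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
  -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity pings in both directions, then the target is started inside the namespace
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF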
00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:49.835 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:49.835 [2024-11-19 16:10:56.504209] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:49.835 [2024-11-19 16:10:56.504347] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.094 [2024-11-19 16:10:56.655402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.094 [2024-11-19 16:10:56.676197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.094 [2024-11-19 16:10:56.676275] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.094 [2024-11-19 16:10:56.676288] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.094 [2024-11-19 16:10:56.676297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.094 [2024-11-19 16:10:56.676304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.094 [2024-11-19 16:10:56.676620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.094 [2024-11-19 16:10:56.707946] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.094 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.094 [2024-11-19 16:10:56.804532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.353 Malloc0 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:50.353 [2024-11-19 16:10:56.839632] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=85987 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=85988 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=85989 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:50.353 16:10:56 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 85987 00:15:50.353 [2024-11-19 16:10:57.028124] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.353 [2024-11-19 16:10:57.028350] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:50.353 [2024-11-19 16:10:57.038150] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:51.730 Initializing NVMe Controllers 00:15:51.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:51.730 Initialization complete. Launching workers. 00:15:51.730 ======================================================== 00:15:51.730 Latency(us) 00:15:51.730 Device Information : IOPS MiB/s Average min max 00:15:51.730 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3018.00 11.79 331.00 185.89 1751.41 00:15:51.730 ======================================================== 00:15:51.730 Total : 3018.00 11.79 331.00 185.89 1751.41 00:15:51.730 00:15:51.730 Initializing NVMe Controllers 00:15:51.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:51.730 Initialization complete. Launching workers. 00:15:51.730 ======================================================== 00:15:51.730 Latency(us) 00:15:51.730 Device Information : IOPS MiB/s Average min max 00:15:51.730 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3018.00 11.79 330.91 207.90 1747.34 00:15:51.730 ======================================================== 00:15:51.730 Total : 3018.00 11.79 330.91 207.90 1747.34 00:15:51.730 00:15:51.730 Initializing NVMe Controllers 00:15:51.730 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:51.730 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:51.730 Initialization complete. Launching workers. 
00:15:51.730 ======================================================== 00:15:51.730 Latency(us) 00:15:51.730 Device Information : IOPS MiB/s Average min max 00:15:51.730 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3027.96 11.83 329.70 129.11 1748.31 00:15:51.730 ======================================================== 00:15:51.730 Total : 3027.96 11.83 329.70 129.11 1748.31 00:15:51.730 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 85988 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 85989 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:51.730 rmmod nvme_tcp 00:15:51.730 rmmod nvme_fabrics 00:15:51.730 rmmod nvme_keyring 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 85968 ']' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 85968 ']' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.730 killing process with pid 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85968' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 85968 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:51.730 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:51.990 00:15:51.990 real 0m2.815s 00:15:51.990 user 0m4.659s 00:15:51.990 
sys 0m1.319s 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:51.990 ************************************ 00:15:51.990 END TEST nvmf_control_msg_list 00:15:51.990 ************************************ 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:51.990 ************************************ 00:15:51.990 START TEST nvmf_wait_for_buf 00:15:51.990 ************************************ 00:15:51.990 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:52.251 * Looking for test storage... 00:15:52.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:52.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.251 --rc genhtml_branch_coverage=1 00:15:52.251 --rc genhtml_function_coverage=1 00:15:52.251 --rc genhtml_legend=1 00:15:52.251 --rc geninfo_all_blocks=1 00:15:52.251 --rc geninfo_unexecuted_blocks=1 00:15:52.251 00:15:52.251 ' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:52.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.251 --rc genhtml_branch_coverage=1 00:15:52.251 --rc genhtml_function_coverage=1 00:15:52.251 --rc genhtml_legend=1 00:15:52.251 --rc geninfo_all_blocks=1 00:15:52.251 --rc geninfo_unexecuted_blocks=1 00:15:52.251 00:15:52.251 ' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:52.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.251 --rc genhtml_branch_coverage=1 00:15:52.251 --rc genhtml_function_coverage=1 00:15:52.251 --rc genhtml_legend=1 00:15:52.251 --rc geninfo_all_blocks=1 00:15:52.251 --rc geninfo_unexecuted_blocks=1 00:15:52.251 00:15:52.251 ' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:52.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.251 --rc genhtml_branch_coverage=1 00:15:52.251 --rc genhtml_function_coverage=1 00:15:52.251 --rc genhtml_legend=1 00:15:52.251 --rc geninfo_all_blocks=1 00:15:52.251 --rc geninfo_unexecuted_blocks=1 00:15:52.251 00:15:52.251 ' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:52.251 16:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:52.251 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:52.252 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
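The "[: : integer expression expected" message just above is emitted when nvmf/common.sh line 33 runs a numeric test ('[' '' -eq 1 ']') against a variable that is empty in this CI run; bash's -eq needs an integer on both sides, so it prints the error and the branch simply falls through. A minimal sketch of the failure and a guarded form, using a hypothetical SPDK_TEST_FOO variable purely for illustration:

  #!/usr/bin/env bash
  SPDK_TEST_FOO=""                        # unset/empty, as in this run
  if [ "$SPDK_TEST_FOO" -eq 1 ]; then     # -eq needs an integer; an empty string triggers
      echo "feature enabled"              # "[: : integer expression expected" on stderr
  fi
  # guarded form: default the value to 0 so the numeric comparison always sees an integer
  if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi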
00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:52.252 Cannot find device "nvmf_init_br" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:52.252 Cannot find device "nvmf_init_br2" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:52.252 Cannot find device "nvmf_tgt_br" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:52.252 Cannot find device "nvmf_tgt_br2" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:52.252 Cannot find device "nvmf_init_br" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:52.252 Cannot find device "nvmf_init_br2" 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:52.252 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:52.511 Cannot find device "nvmf_tgt_br" 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:52.511 Cannot find device "nvmf_tgt_br2" 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:52.511 Cannot find device "nvmf_br" 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:52.511 16:10:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:52.511 Cannot find device "nvmf_init_if" 00:15:52.511 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:52.511 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:52.511 Cannot find device "nvmf_init_if2" 00:15:52.511 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:52.512 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:52.512 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:52.512 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:52.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:52.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:15:52.770 00:15:52.770 --- 10.0.0.3 ping statistics --- 00:15:52.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.770 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:52.770 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:52.770 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:15:52.770 00:15:52.770 --- 10.0.0.4 ping statistics --- 00:15:52.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.770 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:52.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:52.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:52.770 00:15:52.770 --- 10.0.0.1 ping statistics --- 00:15:52.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.770 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:52.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:52.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:15:52.770 00:15:52.770 --- 10.0.0.2 ping statistics --- 00:15:52.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.770 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=86224 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 86224 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 86224 ']' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.770 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:52.770 [2024-11-19 16:10:59.357341] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
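The nvmf_veth_init steps traced above boil down to one host-to-namespace topology: the target side lives in the nvmf_tgt_ns_spdk namespace, the initiator side stays on the host, and the four bridge-facing veth peers hang off a single nvmf_br bridge. A condensed sketch of the same commands (the real helper additionally tags each iptables rule with an SPDK_NVMF comment so teardown can strip them later):

  ip netns add nvmf_tgt_ns_spdk
  # host-side initiator interfaces and their bridge-facing peers
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  # target-side interfaces, moved into the namespace, with bridge-facing peers
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # addressing: 10.0.0.1/.2 on the host, 10.0.0.3/.4 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # one bridge ties the four peer ends together
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_tgt_if nvmf_tgt_if2; do ip netns exec nvmf_tgt_ns_spdk ip link set "$dev" up; done
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  # open TCP/4420 for NVMe-oF, allow bridge-local forwarding, then verify reachability both ways
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1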
00:15:52.770 [2024-11-19 16:10:59.357714] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.029 [2024-11-19 16:10:59.514856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.030 [2024-11-19 16:10:59.543090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.030 [2024-11-19 16:10:59.543164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.030 [2024-11-19 16:10:59.543197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:53.030 [2024-11-19 16:10:59.543223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:53.030 [2024-11-19 16:10:59.543273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:53.030 [2024-11-19 16:10:59.543733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:53.030 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.030 16:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 [2024-11-19 16:10:59.745313] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 Malloc0 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 [2024-11-19 16:10:59.791578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:53.289 [2024-11-19 16:10:59.819691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.289 16:10:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:53.548 [2024-11-19 16:11:00.025430] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:54.926 Initializing NVMe Controllers 00:15:54.926 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:54.926 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:15:54.926 Initialization complete. Launching workers. 00:15:54.926 ======================================================== 00:15:54.926 Latency(us) 00:15:54.926 Device Information : IOPS MiB/s Average min max 00:15:54.926 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 483.96 60.49 8265.41 6051.80 15991.89 00:15:54.926 ======================================================== 00:15:54.926 Total : 483.96 60.49 8265.41 6051.80 15991.89 00:15:54.926 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4598 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4598 -eq 0 ]] 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:54.926 rmmod nvme_tcp 00:15:54.926 rmmod nvme_fabrics 00:15:54.926 rmmod nvme_keyring 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 86224 ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 86224 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 86224 ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 86224 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86224 00:15:54.926 killing process with pid 86224 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86224' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 86224 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 86224 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:54.926 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:15:55.186 00:15:55.186 real 0m3.180s 00:15:55.186 user 0m2.599s 00:15:55.186 sys 0m0.770s 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:55.186 ************************************ 00:15:55.186 END TEST nvmf_wait_for_buf 00:15:55.186 ************************************ 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.186 16:11:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.446 ************************************ 00:15:55.446 START TEST nvmf_fuzz 00:15:55.446 ************************************ 00:15:55.446 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:15:55.446 * Looking for test storage... 
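The nvmf_wait_for_buf run that just finished above reduces to a short RPC sequence: the iobuf small pool and the TCP transport buffer count are made deliberately tiny, a small randread workload is driven through the target, and the test passes only if iobuf_get_stats reports that requests had to retry (wait) for a small buffer (4598 retries in this run). A condensed sketch reconstructed from the trace, assuming rpc_cmd resolves to scripts/rpc.py on its default /var/tmp/spdk.sock socket as shown by the target's startup message:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target is launched inside the namespace, paused until framework_start_init
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # keep accel caches empty and the shared small iobuf pool tiny so data buffers contend
  $RPC accel_set_options --small-cache-size 0 --large-cache-size 0
  $RPC iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  $RPC framework_start_init
  # one malloc namespace behind a TCP listener, with a deliberately small transport buffer count
  $RPC bdev_malloc_create -b Malloc0 32 512
  $RPC nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  $RPC nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # drive I/O, then require that at least one request waited for a small iobuf
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  retries=$($RPC iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retries -eq 0 ]] && exit 1   # fail only if no buffer waits were observed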
00:15:55.446 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:55.446 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:55.446 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:55.446 16:11:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.446 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:55.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.446 --rc genhtml_branch_coverage=1 00:15:55.446 --rc genhtml_function_coverage=1 00:15:55.446 --rc genhtml_legend=1 00:15:55.446 --rc geninfo_all_blocks=1 00:15:55.446 --rc geninfo_unexecuted_blocks=1 00:15:55.446 00:15:55.446 ' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:55.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.447 --rc genhtml_branch_coverage=1 00:15:55.447 --rc genhtml_function_coverage=1 00:15:55.447 --rc genhtml_legend=1 00:15:55.447 --rc geninfo_all_blocks=1 00:15:55.447 --rc geninfo_unexecuted_blocks=1 00:15:55.447 00:15:55.447 ' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:55.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.447 --rc genhtml_branch_coverage=1 00:15:55.447 --rc genhtml_function_coverage=1 00:15:55.447 --rc genhtml_legend=1 00:15:55.447 --rc geninfo_all_blocks=1 00:15:55.447 --rc geninfo_unexecuted_blocks=1 00:15:55.447 00:15:55.447 ' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:55.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.447 --rc genhtml_branch_coverage=1 00:15:55.447 --rc genhtml_function_coverage=1 00:15:55.447 --rc genhtml_legend=1 00:15:55.447 --rc geninfo_all_blocks=1 00:15:55.447 --rc geninfo_unexecuted_blocks=1 00:15:55.447 00:15:55.447 ' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
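The version gate traced above ("lt 1.15 2" via cmp_versions in scripts/common.sh) decides which set of lcov flags to use by splitting each version string on ".-:" and comparing components numerically. A simplified, standalone re-implementation of that check, for orientation only (the real cmp_versions supports more operators):

  # returns 0 (true) when $1 is strictly older than $2, e.g. cmp_lt 1.15 2
  cmp_lt() {
      local -a ver1 ver2
      IFS='.-:' read -ra ver1 <<< "$1"
      IFS='.-:' read -ra ver2 <<< "$2"
      local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}
          (( a > b )) && return 1
          (( a < b )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  # as in the trace: lcov 1.x gets the --rc lcov_* coverage options
  if cmp_lt "$(lcov --version | awk '{print $NF}')" 2; then
      LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  fi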
00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
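nvmftestinit's first visible action above is to arm "trap nvmftestfini SIGINT SIGTERM EXIT", so the namespace, veth/bridge devices, SPDK_NVMF iptables rules, and nvme kernel modules are torn down even if the test aborts mid-run; the matching teardown was traced earlier at the end of nvmf_wait_for_buf. A condensed sketch of that idiom ("cleanup" here stands in for the real nvmftestfini, which additionally kills the target process and unloads nvme_keyring):

  cleanup() {
      # restore iptables without the SPDK_NVMF-tagged rules, drop the test devices, remove the namespace
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      for dev in nvmf_init_if nvmf_init_if2 nvmf_br; do ip link delete "$dev" 2>/dev/null || true; done
      ip netns delete nvmf_tgt_ns_spdk 2>/dev/null || true
      modprobe -v -r nvme-tcp nvme-fabrics || true
  }
  trap cleanup SIGINT SIGTERM EXIT
  # ... setup and test body run here; cleanup fires on any exit path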
00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:55.447 Cannot find device "nvmf_init_br" 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:15:55.447 16:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:55.447 Cannot find device "nvmf_init_br2" 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:15:55.447 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:55.707 Cannot find device "nvmf_tgt_br" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:55.707 Cannot find device "nvmf_tgt_br2" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:55.707 Cannot find device "nvmf_init_br" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:55.707 Cannot find device "nvmf_init_br2" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:55.707 Cannot find device "nvmf_tgt_br" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:55.707 Cannot find device "nvmf_tgt_br2" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:55.707 Cannot find device "nvmf_br" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:55.707 Cannot find device "nvmf_init_if" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:55.707 Cannot find device "nvmf_init_if2" 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:55.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:55.707 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:55.707 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:55.967 16:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:55.967 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:55.967 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.092 ms 00:15:55.967 00:15:55.967 --- 10.0.0.3 ping statistics --- 00:15:55.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.967 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:55.967 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:55.967 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:55.967 00:15:55.967 --- 10.0.0.4 ping statistics --- 00:15:55.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.967 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:55.967 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.967 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:15:55.967 00:15:55.967 --- 10.0.0.1 ping statistics --- 00:15:55.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.967 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:55.967 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
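The nvmf_veth_init sequence traced above builds the whole virtual fabric: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined through the nvmf_br bridge, ACCEPT rules for port 4420 tagged with an SPDK_NVMF comment so cleanup can find them later, and pings in both directions to prove connectivity. A condensed sketch of the same topology for a single pair (the real helper creates four; names and addresses match the log):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Tagged firewall rule, mirroring the ipts wrapper above; the comment lets
    # cleanup later strip exactly these rules.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
             -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

    ping -c 1 10.0.0.3                                 # host -> target namespace
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target namespace -> host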
00:15:55.967 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:15:55.967 00:15:55.967 --- 10.0.0.2 ping statistics --- 00:15:55.967 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.967 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=86480 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 86480 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 86480 ']' 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
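With connectivity verified, the fuzz script prefixes NVMF_APP with the namespace exec command and launches the target on a single core (-m 0x1), then waitforlisten blocks until the RPC socket /var/tmp/spdk.sock answers. A minimal sketch of that launch-and-wait pattern; the polling loop here is illustrative (the real waitforlisten lives in autotest_common.sh), but the launch line mirrors the trace:

    # Launch the target inside the namespace, background it, remember the pid.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Illustrative wait loop: poll the RPC socket until the app responds.
    for ((i = 0; i < 100; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done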
00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.967 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 Malloc0 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.227 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:56.486 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.486 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:15:56.486 16:11:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:15:56.745 Shutting down the fuzz application 00:15:56.745 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:15:56.745 Shutting down the fuzz application 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:57.005 rmmod nvme_tcp 00:15:57.005 rmmod nvme_fabrics 00:15:57.005 rmmod nvme_keyring 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 86480 ']' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 86480 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 86480 ']' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 86480 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86480 00:15:57.005 killing process with pid 86480 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86480' 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 86480 00:15:57.005 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 86480 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:57.265 16:11:03 
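The fuzz target above is configured entirely over JSON-RPC: a TCP transport with the options shown (-t tcp -o -u 8192), a 64 MiB malloc bdev with 512-byte blocks, one subsystem carrying that bdev as a namespace, and a TCP listener on 10.0.0.3:4420; nvme_fuzz is then pointed at the resulting transport ID, first for a timed randomized pass (-t 30 -S 123456 appear to set run time and seed) and then replaying example.json. The same configuration can be reproduced by hand with scripts/rpc.py against the running target; a sketch mirroring the rpc_cmd calls in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t tcp -o -u 8192                # transport options copied from the trace
    $RPC bdev_malloc_create -b Malloc0 64 512                   # 64 MiB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host, -s: serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # Fuzz passes, exactly as invoked above (remaining flags copied verbatim).
    FUZZ=/home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420'
    $FUZZ -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a
    $FUZZ -m 0x2 -F "$TRID" -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a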
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.265 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.524 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:57.524 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.524 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.524 16:11:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:15:57.524 00:15:57.524 real 0m2.129s 00:15:57.524 user 0m1.721s 00:15:57.524 sys 0m0.671s 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:57.524 ************************************ 00:15:57.524 END TEST nvmf_fuzz 00:15:57.524 ************************************ 00:15:57.524 16:11:04 
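Teardown above runs in reverse order: unload nvme-tcp/fabrics/keyring modules, kill and wait for the target pid, strip only the SPDK-tagged iptables rules via iptr, then dismantle the veth/bridge topology. A short sketch of the firewall and interface cleanup, mirroring the commands traced (one pair shown; the namespace itself is removed by remove_spdk_ns, whose trace is suppressed, so the final line is an assumption):

    # Remove only rules carrying the SPDK_NVMF comment tag, leaving others alone.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Tear down the virtual topology; '|| true' tolerates already-missing devices,
    # matching the "Cannot find device" handling seen during setup.
    ip link set nvmf_init_br nomaster                          || true
    ip link set nvmf_init_br down                              || true
    ip link delete nvmf_br type bridge                         || true
    ip link delete nvmf_init_if                                || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
    ip netns delete nvmf_tgt_ns_spdk                           || true   # assumed step inside remove_spdk_ns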
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:57.524 ************************************ 00:15:57.524 START TEST nvmf_multiconnection 00:15:57.524 ************************************ 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:15:57.524 * Looking for test storage... 00:15:57.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:15:57.524 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.785 --rc genhtml_branch_coverage=1 00:15:57.785 --rc genhtml_function_coverage=1 00:15:57.785 --rc genhtml_legend=1 00:15:57.785 --rc geninfo_all_blocks=1 00:15:57.785 --rc geninfo_unexecuted_blocks=1 00:15:57.785 00:15:57.785 ' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.785 --rc genhtml_branch_coverage=1 00:15:57.785 --rc genhtml_function_coverage=1 00:15:57.785 --rc genhtml_legend=1 00:15:57.785 --rc geninfo_all_blocks=1 00:15:57.785 --rc geninfo_unexecuted_blocks=1 00:15:57.785 00:15:57.785 ' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.785 --rc genhtml_branch_coverage=1 00:15:57.785 --rc genhtml_function_coverage=1 00:15:57.785 --rc genhtml_legend=1 00:15:57.785 --rc geninfo_all_blocks=1 00:15:57.785 --rc geninfo_unexecuted_blocks=1 00:15:57.785 00:15:57.785 ' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:57.785 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.785 --rc genhtml_branch_coverage=1 00:15:57.785 --rc genhtml_function_coverage=1 00:15:57.785 --rc genhtml_legend=1 00:15:57.785 --rc geninfo_all_blocks=1 00:15:57.785 --rc geninfo_unexecuted_blocks=1 00:15:57.785 00:15:57.785 ' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
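Before the multiconnection test proper starts, the harness probes the installed lcov with the lt/cmp_versions helpers from scripts/common.sh, splitting each version string on dots and comparing component by component so coverage flags are only added on a new-enough lcov. A compact sketch of that comparison pattern (helper name illustrative; assumes numeric components, as the real helper enforces with its decimal check):

    # Returns success (0) when $1 < $2, comparing dot/dash-separated numeric fields.
    version_lt() {
        local IFS=.- i
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            local a=${v1[i]:-0} b=${v2[i]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"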
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.785 
16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.785 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.785 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.786 16:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.786 Cannot find device "nvmf_init_br" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.786 Cannot find device "nvmf_init_br2" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.786 Cannot find device "nvmf_tgt_br" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.786 Cannot find device "nvmf_tgt_br2" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.786 Cannot find device "nvmf_init_br" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.786 Cannot find device "nvmf_init_br2" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.786 Cannot find device "nvmf_tgt_br" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.786 Cannot find device "nvmf_tgt_br2" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.786 Cannot find device "nvmf_br" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.786 Cannot find device "nvmf_init_if" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:15:57.786 Cannot find device "nvmf_init_if2" 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.786 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:58.046 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:58.046 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:58.046 00:15:58.046 --- 10.0.0.3 ping statistics --- 00:15:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.046 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:58.046 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:58.046 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:15:58.046 00:15:58.046 --- 10.0.0.4 ping statistics --- 00:15:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.046 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:58.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:58.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:58.046 00:15:58.046 --- 10.0.0.1 ping statistics --- 00:15:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.046 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:58.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:58.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:15:58.046 00:15:58.046 --- 10.0.0.2 ping statistics --- 00:15:58.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:58.046 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.046 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=86712 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 86712 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 86712 ']' 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.047 16:11:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:58.306 [2024-11-19 16:11:04.784633] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:15:58.306 [2024-11-19 16:11:04.784727] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.306 [2024-11-19 16:11:04.940055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:58.306 [2024-11-19 16:11:04.965365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.306 [2024-11-19 16:11:04.965664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.306 [2024-11-19 16:11:04.965845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:58.306 [2024-11-19 16:11:04.966000] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:58.306 [2024-11-19 16:11:04.966049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.306 [2024-11-19 16:11:04.967079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:58.306 [2024-11-19 16:11:04.967287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.306 [2024-11-19 16:11:04.967157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.306 [2024-11-19 16:11:04.967283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.306 [2024-11-19 16:11:04.999831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 [2024-11-19 16:11:05.815553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:15:59.244 16:11:05 
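The target start-up and transport creation captured above can be reproduced roughly as follows. nvmf_tgt is launched inside the test namespace with instance id 0, tracepoint group mask 0xFFFF and a 4-core mask (the four reactors in the log); rpc_cmd in the trace is effectively the autotest wrapper around scripts/rpc.py, and waitforlisten is the suite helper that polls /var/tmp/spdk.sock, so both are shown here only as comments.

    # Start the target inside the namespace (per the trace: -i 0 -e 0xFFFF -m 0xF)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # waitforlisten "$nvmfpid" /var/tmp/spdk.sock   # suite helper: wait for the RPC socket

    # Create the TCP transport with the options echoed in NVMF_TRANSPORT_OPTS
    # ('-o' and '-u 8192' as seen in the trace)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192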
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 Malloc1 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 [2024-11-19 16:11:05.888634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 Malloc2 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.244 Malloc3 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.244 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 Malloc4 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 Malloc5 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:15:59.505 
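The repetitive RPC sequence above (continued below for the remaining Malloc bdevs) is the multiconnection.sh provisioning loop: per subsystem it creates a 64 MB malloc bdev with 512-byte blocks, an allow-any-host subsystem with serial SPDKn, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.3:4420. A condensed equivalent, using the same rpc_cmd wrapper as the trace:

    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.3 -s 4420
    done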
16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 Malloc6 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:15:59.505 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 Malloc7 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 Malloc8 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 
16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.506 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.766 Malloc9 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:15:59.766 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 Malloc10 00:15:59.767 16:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 Malloc11 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:15:59.767 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:00.027 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:00.027 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:00.027 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:00.027 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:00.027 16:11:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:01.933 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:02.193 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:02.193 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:02.193 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:02.193 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:02.193 16:11:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:04.148 16:11:10 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:04.148 16:11:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:06.683 16:11:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:08.586 16:11:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:08.586 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:08.586 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:08.586 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.586 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:08.586 16:11:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.489 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:10.747 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:10.747 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:10.747 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:10.747 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:10.747 16:11:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:12.648 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:12.906 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:12.906 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:12.906 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:12.906 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:12.906 16:11:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:14.811 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:15.071 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:15.071 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:16:15.071 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:15.071 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:15.071 16:11:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:16.975 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:17.234 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:17.234 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:17.234 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.234 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:17.234 16:11:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:19.133 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:19.391 16:11:25 
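On the initiator side, the loop above connects one NVMe/TCP controller per subsystem and then waits (waitforserial) until lsblk reports a block device whose serial matches the SPDKn string assigned at subsystem creation; per the trace the helper retries at two-second intervals and gives up after 15 attempts. A condensed sketch of the same loop:

    for i in $(seq 1 11); do
        nvme connect -t tcp -a 10.0.0.3 -s 4420 \
            -n "nqn.2016-06.io.spdk:cnode$i" \
            --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
            --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1

        # waitforserial SPDK$i, condensed: poll lsblk until the namespace shows up
        tries=0
        until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ] || [ $((tries++)) -gt 15 ]; do
            sleep 2
        done
    done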
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:19.391 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:19.391 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.391 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:19.391 16:11:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:21.317 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:21.317 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:21.317 16:11:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:16:21.317 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:21.317 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.317 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:21.317 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.317 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:21.577 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:21.577 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:16:21.577 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:21.577 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:21.577 16:11:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:16:23.480 16:11:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:23.739 [global] 00:16:23.739 thread=1 00:16:23.739 invalidate=1 00:16:23.739 rw=read 00:16:23.739 time_based=1 
00:16:23.739 runtime=10 00:16:23.739 ioengine=libaio 00:16:23.739 direct=1 00:16:23.739 bs=262144 00:16:23.739 iodepth=64 00:16:23.739 norandommap=1 00:16:23.739 numjobs=1 00:16:23.739 00:16:23.739 [job0] 00:16:23.739 filename=/dev/nvme0n1 00:16:23.739 [job1] 00:16:23.739 filename=/dev/nvme10n1 00:16:23.739 [job2] 00:16:23.739 filename=/dev/nvme1n1 00:16:23.739 [job3] 00:16:23.739 filename=/dev/nvme2n1 00:16:23.739 [job4] 00:16:23.739 filename=/dev/nvme3n1 00:16:23.739 [job5] 00:16:23.739 filename=/dev/nvme4n1 00:16:23.739 [job6] 00:16:23.739 filename=/dev/nvme5n1 00:16:23.739 [job7] 00:16:23.739 filename=/dev/nvme6n1 00:16:23.739 [job8] 00:16:23.739 filename=/dev/nvme7n1 00:16:23.739 [job9] 00:16:23.739 filename=/dev/nvme8n1 00:16:23.739 [job10] 00:16:23.739 filename=/dev/nvme9n1 00:16:23.739 Could not set queue depth (nvme0n1) 00:16:23.739 Could not set queue depth (nvme10n1) 00:16:23.739 Could not set queue depth (nvme1n1) 00:16:23.739 Could not set queue depth (nvme2n1) 00:16:23.739 Could not set queue depth (nvme3n1) 00:16:23.739 Could not set queue depth (nvme4n1) 00:16:23.739 Could not set queue depth (nvme5n1) 00:16:23.739 Could not set queue depth (nvme6n1) 00:16:23.739 Could not set queue depth (nvme7n1) 00:16:23.739 Could not set queue depth (nvme8n1) 00:16:23.739 Could not set queue depth (nvme9n1) 00:16:23.998 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:23.998 fio-3.35 00:16:23.998 Starting 11 threads 00:16:36.214 00:16:36.214 job0: (groupid=0, jobs=1): err= 0: pid=87171: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=981, BW=245MiB/s (257MB/s)(2461MiB/10035msec) 00:16:36.214 slat (usec): min=21, max=9601, avg=967.64, stdev=1895.57 00:16:36.214 clat (msec): min=11, max=466, avg=64.15, stdev=18.93 00:16:36.214 lat (msec): min=12, max=466, avg=65.12, stdev=19.00 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 1.00th=[ 33], 5.00th=[ 38], 10.00th=[ 59], 20.00th=[ 62], 00:16:36.214 | 30.00th=[ 64], 40.00th=[ 64], 50.00th=[ 65], 60.00th=[ 66], 00:16:36.214 | 70.00th=[ 67], 80.00th=[ 68], 90.00th=[ 70], 95.00th=[ 71], 00:16:36.214 | 99.00th=[ 74], 99.50th=[ 79], 99.90th=[ 464], 99.95th=[ 468], 00:16:36.214 | 99.99th=[ 468] 00:16:36.214 bw ( KiB/s): min=235520, max=308630, per=37.32%, avg=250314.70, 
stdev=14282.70, samples=20 00:16:36.214 iops : min= 920, max= 1205, avg=977.75, stdev=55.67, samples=20 00:16:36.214 lat (msec) : 20=0.07%, 50=5.87%, 100=93.86%, 500=0.19% 00:16:36.214 cpu : usr=0.55%, sys=3.94%, ctx=2383, majf=0, minf=4097 00:16:36.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:36.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.214 issued rwts: total=9845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.214 job1: (groupid=0, jobs=1): err= 0: pid=87174: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=338, BW=84.6MiB/s (88.7MB/s)(853MiB/10080msec) 00:16:36.214 slat (usec): min=16, max=51841, avg=2748.10, stdev=6464.36 00:16:36.214 clat (msec): min=3, max=602, avg=186.00, stdev=58.28 00:16:36.214 lat (msec): min=4, max=602, avg=188.74, stdev=58.66 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 1.00th=[ 34], 5.00th=[ 62], 10.00th=[ 148], 20.00th=[ 176], 00:16:36.214 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 199], 00:16:36.214 | 70.00th=[ 203], 80.00th=[ 207], 90.00th=[ 215], 95.00th=[ 220], 00:16:36.214 | 99.00th=[ 550], 99.50th=[ 558], 99.90th=[ 558], 99.95th=[ 600], 00:16:36.214 | 99.99th=[ 600] 00:16:36.214 bw ( KiB/s): min=79360, max=125952, per=12.78%, avg=85726.15, stdev=10081.95, samples=20 00:16:36.214 iops : min= 310, max= 492, avg=334.85, stdev=39.39, samples=20 00:16:36.214 lat (msec) : 4=0.03%, 10=0.26%, 20=0.44%, 50=2.55%, 100=5.16% 00:16:36.214 lat (msec) : 250=90.04%, 500=0.50%, 750=1.03% 00:16:36.214 cpu : usr=0.21%, sys=1.40%, ctx=835, majf=0, minf=4097 00:16:36.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:36.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.214 issued rwts: total=3412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.214 job2: (groupid=0, jobs=1): err= 0: pid=87175: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=204, BW=51.0MiB/s (53.5MB/s)(515MiB/10092msec) 00:16:36.214 slat (usec): min=19, max=165450, avg=4857.79, stdev=12495.94 00:16:36.214 clat (msec): min=34, max=486, avg=308.21, stdev=45.01 00:16:36.214 lat (msec): min=35, max=546, avg=313.07, stdev=45.39 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 1.00th=[ 176], 5.00th=[ 251], 10.00th=[ 279], 20.00th=[ 288], 00:16:36.214 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 313], 00:16:36.214 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 368], 00:16:36.214 | 99.00th=[ 468], 99.50th=[ 485], 99.90th=[ 489], 99.95th=[ 489], 00:16:36.214 | 99.99th=[ 489] 00:16:36.214 bw ( KiB/s): min=36864, max=56832, per=7.62%, avg=51092.15, stdev=4167.14, samples=20 00:16:36.214 iops : min= 144, max= 222, avg=199.55, stdev=16.26, samples=20 00:16:36.214 lat (msec) : 50=0.44%, 100=0.24%, 250=4.22%, 500=95.10% 00:16:36.214 cpu : usr=0.15%, sys=1.00%, ctx=418, majf=0, minf=4097 00:16:36.214 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:36.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.214 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.214 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:36.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.214 job3: (groupid=0, jobs=1): err= 0: pid=87176: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=73, BW=18.3MiB/s (19.2MB/s)(186MiB/10155msec) 00:16:36.214 slat (usec): min=24, max=322543, avg=13481.99, stdev=38790.77 00:16:36.214 clat (msec): min=20, max=1309, avg=859.86, stdev=260.05 00:16:36.214 lat (msec): min=21, max=1309, avg=873.34, stdev=263.00 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 1.00th=[ 85], 5.00th=[ 266], 10.00th=[ 355], 20.00th=[ 785], 00:16:36.214 | 30.00th=[ 818], 40.00th=[ 860], 50.00th=[ 927], 60.00th=[ 969], 00:16:36.214 | 70.00th=[ 1028], 80.00th=[ 1053], 90.00th=[ 1099], 95.00th=[ 1133], 00:16:36.214 | 99.00th=[ 1200], 99.50th=[ 1200], 99.90th=[ 1318], 99.95th=[ 1318], 00:16:36.214 | 99.99th=[ 1318] 00:16:36.214 bw ( KiB/s): min= 7680, max=35328, per=2.59%, avg=17380.95, stdev=6492.97, samples=20 00:16:36.214 iops : min= 30, max= 138, avg=67.85, stdev=25.39, samples=20 00:16:36.214 lat (msec) : 50=0.81%, 100=0.67%, 250=2.42%, 500=8.48%, 750=4.17% 00:16:36.214 lat (msec) : 1000=48.86%, 2000=34.59% 00:16:36.214 cpu : usr=0.02%, sys=0.54%, ctx=150, majf=0, minf=4097 00:16:36.214 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:16:36.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.214 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.214 issued rwts: total=743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.214 job4: (groupid=0, jobs=1): err= 0: pid=87177: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=73, BW=18.5MiB/s (19.4MB/s)(188MiB/10155msec) 00:16:36.214 slat (usec): min=20, max=508243, avg=12977.64, stdev=39317.49 00:16:36.214 clat (msec): min=26, max=1147, avg=850.77, stdev=233.50 00:16:36.214 lat (msec): min=28, max=1232, avg=863.75, stdev=235.89 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 1.00th=[ 101], 5.00th=[ 271], 10.00th=[ 422], 20.00th=[ 810], 00:16:36.214 | 30.00th=[ 852], 40.00th=[ 885], 50.00th=[ 911], 60.00th=[ 936], 00:16:36.214 | 70.00th=[ 969], 80.00th=[ 1011], 90.00th=[ 1045], 95.00th=[ 1083], 00:16:36.214 | 99.00th=[ 1133], 99.50th=[ 1133], 99.90th=[ 1150], 99.95th=[ 1150], 00:16:36.214 | 99.99th=[ 1150] 00:16:36.214 bw ( KiB/s): min= 5632, max=34304, per=2.63%, avg=17610.75, stdev=5929.05, samples=20 00:16:36.214 iops : min= 22, max= 134, avg=68.75, stdev=23.14, samples=20 00:16:36.214 lat (msec) : 50=0.40%, 100=0.80%, 250=2.26%, 500=7.46%, 750=3.60% 00:16:36.214 lat (msec) : 1000=63.65%, 2000=21.84% 00:16:36.214 cpu : usr=0.04%, sys=0.39%, ctx=141, majf=0, minf=4097 00:16:36.214 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.3%, >=64=91.6% 00:16:36.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.214 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.214 issued rwts: total=751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.214 job5: (groupid=0, jobs=1): err= 0: pid=87178: Tue Nov 19 16:11:40 2024 00:16:36.214 read: IOPS=74, BW=18.7MiB/s (19.6MB/s)(190MiB/10156msec) 00:16:36.214 slat (usec): min=18, max=368082, avg=13232.59, stdev=40248.79 00:16:36.214 clat (msec): min=30, max=1191, avg=842.55, stdev=242.60 00:16:36.214 lat (msec): min=31, max=1263, avg=855.78, stdev=245.35 00:16:36.214 clat percentiles (msec): 00:16:36.214 | 
1.00th=[ 37], 5.00th=[ 186], 10.00th=[ 506], 20.00th=[ 768], 00:16:36.214 | 30.00th=[ 818], 40.00th=[ 877], 50.00th=[ 919], 60.00th=[ 953], 00:16:36.214 | 70.00th=[ 978], 80.00th=[ 1003], 90.00th=[ 1053], 95.00th=[ 1083], 00:16:36.214 | 99.00th=[ 1133], 99.50th=[ 1133], 99.90th=[ 1200], 99.95th=[ 1200], 00:16:36.214 | 99.99th=[ 1200] 00:16:36.214 bw ( KiB/s): min= 9216, max=25088, per=2.65%, avg=17765.15, stdev=4304.98, samples=20 00:16:36.215 iops : min= 36, max= 98, avg=69.35, stdev=16.88, samples=20 00:16:36.215 lat (msec) : 50=1.19%, 250=5.54%, 500=2.64%, 750=9.23%, 1000=56.86% 00:16:36.215 lat (msec) : 2000=24.54% 00:16:36.215 cpu : usr=0.01%, sys=0.48%, ctx=135, majf=0, minf=4097 00:16:36.215 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.1%, 32=4.2%, >=64=91.7% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.215 issued rwts: total=758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 job6: (groupid=0, jobs=1): err= 0: pid=87179: Tue Nov 19 16:11:40 2024 00:16:36.215 read: IOPS=346, BW=86.5MiB/s (90.7MB/s)(872MiB/10083msec) 00:16:36.215 slat (usec): min=18, max=44366, avg=2860.32, stdev=6474.29 00:16:36.215 clat (msec): min=13, max=276, avg=181.90, stdev=37.50 00:16:36.215 lat (msec): min=14, max=276, avg=184.76, stdev=38.04 00:16:36.215 clat percentiles (msec): 00:16:36.215 | 1.00th=[ 50], 5.00th=[ 103], 10.00th=[ 121], 20.00th=[ 171], 00:16:36.215 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 197], 00:16:36.215 | 70.00th=[ 201], 80.00th=[ 205], 90.00th=[ 211], 95.00th=[ 218], 00:16:36.215 | 99.00th=[ 236], 99.50th=[ 241], 99.90th=[ 245], 99.95th=[ 275], 00:16:36.215 | 99.99th=[ 275] 00:16:36.215 bw ( KiB/s): min=72192, max=164352, per=13.08%, avg=87697.60, stdev=18938.48, samples=20 00:16:36.215 iops : min= 282, max= 642, avg=342.55, stdev=73.99, samples=20 00:16:36.215 lat (msec) : 20=0.14%, 50=1.72%, 100=3.07%, 250=95.01%, 500=0.06% 00:16:36.215 cpu : usr=0.18%, sys=1.67%, ctx=760, majf=0, minf=4098 00:16:36.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.215 issued rwts: total=3489,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 job7: (groupid=0, jobs=1): err= 0: pid=87180: Tue Nov 19 16:11:40 2024 00:16:36.215 read: IOPS=71, BW=18.0MiB/s (18.9MB/s)(183MiB/10143msec) 00:16:36.215 slat (usec): min=23, max=577182, avg=13795.15, stdev=46334.27 00:16:36.215 clat (msec): min=66, max=1282, avg=874.46, stdev=207.47 00:16:36.215 lat (msec): min=70, max=1448, avg=888.25, stdev=210.41 00:16:36.215 clat percentiles (msec): 00:16:36.215 | 1.00th=[ 72], 5.00th=[ 426], 10.00th=[ 667], 20.00th=[ 793], 00:16:36.215 | 30.00th=[ 844], 40.00th=[ 885], 50.00th=[ 927], 60.00th=[ 944], 00:16:36.215 | 70.00th=[ 978], 80.00th=[ 1011], 90.00th=[ 1053], 95.00th=[ 1133], 00:16:36.215 | 99.00th=[ 1183], 99.50th=[ 1183], 99.90th=[ 1284], 99.95th=[ 1284], 00:16:36.215 | 99.99th=[ 1284] 00:16:36.215 bw ( KiB/s): min= 3072, max=32256, per=2.68%, avg=17942.89, stdev=6947.33, samples=19 00:16:36.215 iops : min= 12, max= 126, avg=70.00, stdev=27.12, samples=19 00:16:36.215 lat (msec) : 100=2.47%, 250=1.37%, 500=1.64%, 
750=9.59%, 1000=61.10% 00:16:36.215 lat (msec) : 2000=23.84% 00:16:36.215 cpu : usr=0.09%, sys=0.39%, ctx=121, majf=0, minf=4097 00:16:36.215 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.215 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 job8: (groupid=0, jobs=1): err= 0: pid=87181: Tue Nov 19 16:11:40 2024 00:16:36.215 read: IOPS=202, BW=50.6MiB/s (53.0MB/s)(510MiB/10083msec) 00:16:36.215 slat (usec): min=23, max=323683, avg=4906.87, stdev=13476.82 00:16:36.215 clat (msec): min=60, max=655, avg=311.09, stdev=58.47 00:16:36.215 lat (msec): min=91, max=655, avg=316.00, stdev=58.68 00:16:36.215 clat percentiles (msec): 00:16:36.215 | 1.00th=[ 131], 5.00th=[ 234], 10.00th=[ 271], 20.00th=[ 288], 00:16:36.215 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 313], 00:16:36.215 | 70.00th=[ 321], 80.00th=[ 330], 90.00th=[ 355], 95.00th=[ 384], 00:16:36.215 | 99.00th=[ 550], 99.50th=[ 558], 99.90th=[ 659], 99.95th=[ 659], 00:16:36.215 | 99.99th=[ 659] 00:16:36.215 bw ( KiB/s): min=16896, max=57344, per=7.54%, avg=50569.60, stdev=8555.49, samples=20 00:16:36.215 iops : min= 66, max= 224, avg=197.45, stdev=33.39, samples=20 00:16:36.215 lat (msec) : 100=0.15%, 250=6.18%, 500=91.23%, 750=2.45% 00:16:36.215 cpu : usr=0.14%, sys=0.82%, ctx=451, majf=0, minf=4097 00:16:36.215 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.215 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 job9: (groupid=0, jobs=1): err= 0: pid=87182: Tue Nov 19 16:11:40 2024 00:16:36.215 read: IOPS=71, BW=17.8MiB/s (18.7MB/s)(181MiB/10157msec) 00:16:36.215 slat (usec): min=19, max=233192, avg=13877.67, stdev=38446.90 00:16:36.215 clat (msec): min=33, max=1250, avg=882.97, stdev=231.62 00:16:36.215 lat (msec): min=33, max=1282, avg=896.84, stdev=232.30 00:16:36.215 clat percentiles (msec): 00:16:36.215 | 1.00th=[ 53], 5.00th=[ 284], 10.00th=[ 726], 20.00th=[ 776], 00:16:36.215 | 30.00th=[ 818], 40.00th=[ 844], 50.00th=[ 894], 60.00th=[ 953], 00:16:36.215 | 70.00th=[ 1011], 80.00th=[ 1083], 90.00th=[ 1133], 95.00th=[ 1183], 00:16:36.215 | 99.00th=[ 1234], 99.50th=[ 1250], 99.90th=[ 1250], 99.95th=[ 1250], 00:16:36.215 | 99.99th=[ 1250] 00:16:36.215 bw ( KiB/s): min= 7680, max=29184, per=2.52%, avg=16870.00, stdev=6845.07, samples=20 00:16:36.215 iops : min= 30, max= 114, avg=65.85, stdev=26.73, samples=20 00:16:36.215 lat (msec) : 50=0.28%, 100=1.38%, 250=0.97%, 500=4.70%, 750=5.67% 00:16:36.215 lat (msec) : 1000=55.60%, 2000=31.40% 00:16:36.215 cpu : usr=0.02%, sys=0.44%, ctx=139, majf=0, minf=4097 00:16:36.215 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.3% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.2%, >=64=0.0% 00:16:36.215 issued rwts: total=723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 job10: (groupid=0, 
jobs=1): err= 0: pid=87184: Tue Nov 19 16:11:40 2024 00:16:36.215 read: IOPS=204, BW=51.0MiB/s (53.5MB/s)(515MiB/10090msec) 00:16:36.215 slat (usec): min=24, max=147879, avg=4853.43, stdev=12546.71 00:16:36.215 clat (msec): min=65, max=501, avg=308.21, stdev=47.04 00:16:36.215 lat (msec): min=65, max=602, avg=313.06, stdev=47.58 00:16:36.215 clat percentiles (msec): 00:16:36.215 | 1.00th=[ 132], 5.00th=[ 241], 10.00th=[ 266], 20.00th=[ 288], 00:16:36.215 | 30.00th=[ 296], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 317], 00:16:36.215 | 70.00th=[ 321], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 372], 00:16:36.215 | 99.00th=[ 460], 99.50th=[ 489], 99.90th=[ 502], 99.95th=[ 502], 00:16:36.215 | 99.99th=[ 502] 00:16:36.215 bw ( KiB/s): min=35328, max=57856, per=7.62%, avg=51118.20, stdev=4487.71, samples=20 00:16:36.215 iops : min= 138, max= 226, avg=199.65, stdev=17.54, samples=20 00:16:36.215 lat (msec) : 100=0.87%, 250=5.78%, 500=93.20%, 750=0.15% 00:16:36.215 cpu : usr=0.17%, sys=0.92%, ctx=390, majf=0, minf=4097 00:16:36.215 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:16:36.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:36.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:36.215 issued rwts: total=2060,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:36.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:36.215 00:16:36.215 Run status group 0 (all jobs): 00:16:36.215 READ: bw=655MiB/s (687MB/s), 17.8MiB/s-245MiB/s (18.7MB/s-257MB/s), io=6653MiB (6976MB), run=10035-10157msec 00:16:36.215 00:16:36.215 Disk stats (read/write): 00:16:36.215 nvme0n1: ios=19531/0, merge=0/0, ticks=1235375/0, in_queue=1235375, util=97.45% 00:16:36.215 nvme10n1: ios=6696/0, merge=0/0, ticks=1231932/0, in_queue=1231932, util=97.67% 00:16:36.215 nvme1n1: ios=3990/0, merge=0/0, ticks=1222301/0, in_queue=1222301, util=97.91% 00:16:36.215 nvme2n1: ios=1359/0, merge=0/0, ticks=1177946/0, in_queue=1177946, util=98.10% 00:16:36.215 nvme3n1: ios=1380/0, merge=0/0, ticks=1178669/0, in_queue=1178669, util=98.07% 00:16:36.215 nvme4n1: ios=1388/0, merge=0/0, ticks=1191472/0, in_queue=1191472, util=98.19% 00:16:36.215 nvme5n1: ios=6851/0, merge=0/0, ticks=1233285/0, in_queue=1233285, util=98.50% 00:16:36.215 nvme6n1: ios=1333/0, merge=0/0, ticks=1187667/0, in_queue=1187667, util=98.43% 00:16:36.215 nvme7n1: ios=3941/0, merge=0/0, ticks=1226430/0, in_queue=1226430, util=98.72% 00:16:36.215 nvme8n1: ios=1323/0, merge=0/0, ticks=1183996/0, in_queue=1183996, util=98.88% 00:16:36.215 nvme9n1: ios=3995/0, merge=0/0, ticks=1224480/0, in_queue=1224480, util=99.03% 00:16:36.215 16:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:36.215 [global] 00:16:36.215 thread=1 00:16:36.215 invalidate=1 00:16:36.215 rw=randwrite 00:16:36.215 time_based=1 00:16:36.215 runtime=10 00:16:36.215 ioengine=libaio 00:16:36.215 direct=1 00:16:36.215 bs=262144 00:16:36.215 iodepth=64 00:16:36.215 norandommap=1 00:16:36.215 numjobs=1 00:16:36.215 00:16:36.215 [job0] 00:16:36.215 filename=/dev/nvme0n1 00:16:36.215 [job1] 00:16:36.215 filename=/dev/nvme10n1 00:16:36.215 [job2] 00:16:36.215 filename=/dev/nvme1n1 00:16:36.215 [job3] 00:16:36.215 filename=/dev/nvme2n1 00:16:36.215 [job4] 00:16:36.215 filename=/dev/nvme3n1 00:16:36.215 [job5] 00:16:36.215 filename=/dev/nvme4n1 00:16:36.215 [job6] 00:16:36.215 
filename=/dev/nvme5n1 00:16:36.215 [job7] 00:16:36.215 filename=/dev/nvme6n1 00:16:36.215 [job8] 00:16:36.215 filename=/dev/nvme7n1 00:16:36.215 [job9] 00:16:36.215 filename=/dev/nvme8n1 00:16:36.215 [job10] 00:16:36.215 filename=/dev/nvme9n1 00:16:36.215 Could not set queue depth (nvme0n1) 00:16:36.215 Could not set queue depth (nvme10n1) 00:16:36.215 Could not set queue depth (nvme1n1) 00:16:36.216 Could not set queue depth (nvme2n1) 00:16:36.216 Could not set queue depth (nvme3n1) 00:16:36.216 Could not set queue depth (nvme4n1) 00:16:36.216 Could not set queue depth (nvme5n1) 00:16:36.216 Could not set queue depth (nvme6n1) 00:16:36.216 Could not set queue depth (nvme7n1) 00:16:36.216 Could not set queue depth (nvme8n1) 00:16:36.216 Could not set queue depth (nvme9n1) 00:16:36.216 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:36.216 fio-3.35 00:16:36.216 Starting 11 threads 00:16:46.214 00:16:46.214 job0: (groupid=0, jobs=1): err= 0: pid=87386: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=188, BW=47.0MiB/s (49.3MB/s)(478MiB/10175msec); 0 zone resets 00:16:46.214 slat (usec): min=17, max=164593, avg=5075.54, stdev=10020.29 00:16:46.214 clat (msec): min=30, max=461, avg=335.08, stdev=66.82 00:16:46.214 lat (msec): min=30, max=461, avg=340.16, stdev=67.09 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 107], 5.00th=[ 222], 10.00th=[ 271], 20.00th=[ 284], 00:16:46.214 | 30.00th=[ 292], 40.00th=[ 326], 50.00th=[ 359], 60.00th=[ 372], 00:16:46.214 | 70.00th=[ 384], 80.00th=[ 388], 90.00th=[ 397], 95.00th=[ 414], 00:16:46.214 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 460], 99.95th=[ 460], 00:16:46.214 | 99.99th=[ 460] 00:16:46.214 bw ( KiB/s): min=39936, max=57344, per=4.44%, avg=47351.40, stdev=6695.08, samples=20 00:16:46.214 iops : min= 156, max= 224, avg=184.95, stdev=26.16, samples=20 00:16:46.214 lat (msec) : 50=0.37%, 100=0.63%, 250=4.23%, 500=94.77% 00:16:46.214 cpu : usr=0.35%, sys=0.52%, ctx=2101, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=99.9%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,1913,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job1: (groupid=0, jobs=1): err= 0: pid=87387: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=339, BW=84.9MiB/s (89.1MB/s)(864MiB/10176msec); 0 zone resets 00:16:46.214 slat (usec): min=16, max=44457, avg=2871.51, stdev=5288.25 00:16:46.214 clat (msec): min=17, max=457, avg=185.40, stdev=56.65 00:16:46.214 lat (msec): min=17, max=458, avg=188.27, stdev=57.25 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 79], 5.00th=[ 99], 10.00th=[ 146], 20.00th=[ 161], 00:16:46.214 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 171], 00:16:46.214 | 70.00th=[ 174], 80.00th=[ 255], 90.00th=[ 279], 95.00th=[ 284], 00:16:46.214 | 99.00th=[ 305], 99.50th=[ 388], 99.90th=[ 439], 99.95th=[ 460], 00:16:46.214 | 99.99th=[ 460] 00:16:46.214 bw ( KiB/s): min=57344, max=137490, per=8.15%, avg=86900.10, stdev=21897.45, samples=20 00:16:46.214 iops : min= 224, max= 537, avg=339.45, stdev=85.53, samples=20 00:16:46.214 lat (msec) : 20=0.12%, 50=0.49%, 100=6.91%, 250=71.42%, 500=21.06% 00:16:46.214 cpu : usr=0.53%, sys=0.88%, ctx=4853, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,3457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job2: (groupid=0, jobs=1): err= 0: pid=87399: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=821, BW=205MiB/s (215MB/s)(2065MiB/10051msec); 0 zone resets 00:16:46.214 slat (usec): min=16, max=131632, avg=1206.21, stdev=2593.58 00:16:46.214 clat (msec): min=47, max=269, avg=76.64, stdev=27.12 00:16:46.214 lat (msec): min=50, max=269, avg=77.85, stdev=27.44 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 55], 20.00th=[ 56], 00:16:46.214 | 30.00th=[ 58], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 61], 00:16:46.214 | 70.00th=[ 99], 80.00th=[ 104], 90.00th=[ 105], 95.00th=[ 106], 00:16:46.214 | 99.00th=[ 159], 99.50th=[ 176], 99.90th=[ 251], 99.95th=[ 259], 00:16:46.214 | 99.99th=[ 271] 00:16:46.214 bw ( KiB/s): min=79872, max=284160, per=19.68%, avg=209868.80, stdev=69814.63, samples=20 00:16:46.214 iops : min= 312, max= 1110, avg=819.80, stdev=272.71, samples=20 00:16:46.214 lat (msec) : 50=0.01%, 100=71.25%, 250=28.60%, 500=0.13% 00:16:46.214 cpu : usr=1.24%, sys=1.78%, ctx=9359, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,8261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job3: (groupid=0, jobs=1): err= 0: pid=87400: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=329, BW=82.5MiB/s (86.5MB/s)(840MiB/10182msec); 0 zone resets 00:16:46.214 slat (usec): min=19, max=141282, avg=2923.52, stdev=5842.02 00:16:46.214 clat (msec): min=6, max=454, avg=190.86, stdev=54.33 00:16:46.214 lat (msec): min=6, max=454, avg=193.79, stdev=54.86 00:16:46.214 clat percentiles (msec): 
00:16:46.214 | 1.00th=[ 38], 5.00th=[ 148], 10.00th=[ 157], 20.00th=[ 161], 00:16:46.214 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 171], 60.00th=[ 174], 00:16:46.214 | 70.00th=[ 174], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 288], 00:16:46.214 | 99.00th=[ 317], 99.50th=[ 384], 99.90th=[ 435], 99.95th=[ 456], 00:16:46.214 | 99.99th=[ 456] 00:16:46.214 bw ( KiB/s): min=57344, max=116736, per=7.91%, avg=84382.65, stdev=19259.69, samples=20 00:16:46.214 iops : min= 224, max= 456, avg=329.60, stdev=75.21, samples=20 00:16:46.214 lat (msec) : 10=0.09%, 20=0.09%, 50=1.10%, 100=0.74%, 250=75.48% 00:16:46.214 lat (msec) : 500=22.50% 00:16:46.214 cpu : usr=0.61%, sys=0.78%, ctx=3926, majf=0, minf=2 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,3360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job4: (groupid=0, jobs=1): err= 0: pid=87401: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=343, BW=85.9MiB/s (90.1MB/s)(874MiB/10175msec); 0 zone resets 00:16:46.214 slat (usec): min=19, max=74868, avg=2782.10, stdev=5315.60 00:16:46.214 clat (msec): min=7, max=454, avg=183.40, stdev=56.67 00:16:46.214 lat (msec): min=7, max=455, avg=186.18, stdev=57.37 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 89], 5.00th=[ 100], 10.00th=[ 113], 20.00th=[ 159], 00:16:46.214 | 30.00th=[ 161], 40.00th=[ 167], 50.00th=[ 171], 60.00th=[ 171], 00:16:46.214 | 70.00th=[ 174], 80.00th=[ 255], 90.00th=[ 279], 95.00th=[ 284], 00:16:46.214 | 99.00th=[ 305], 99.50th=[ 384], 99.90th=[ 435], 99.95th=[ 456], 00:16:46.214 | 99.99th=[ 456] 00:16:46.214 bw ( KiB/s): min=57344, max=135168, per=8.24%, avg=87884.80, stdev=23203.37, samples=20 00:16:46.214 iops : min= 224, max= 528, avg=343.30, stdev=90.64, samples=20 00:16:46.214 lat (msec) : 10=0.06%, 50=0.06%, 100=7.35%, 250=71.68%, 500=20.85% 00:16:46.214 cpu : usr=0.61%, sys=0.90%, ctx=5102, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,3496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job5: (groupid=0, jobs=1): err= 0: pid=87402: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=548, BW=137MiB/s (144MB/s)(1386MiB/10101msec); 0 zone resets 00:16:46.214 slat (usec): min=17, max=66116, avg=1783.65, stdev=3227.50 00:16:46.214 clat (msec): min=4, max=228, avg=114.77, stdev=21.60 00:16:46.214 lat (msec): min=4, max=228, avg=116.55, stdev=21.73 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 46], 5.00th=[ 96], 10.00th=[ 97], 20.00th=[ 102], 00:16:46.214 | 30.00th=[ 104], 40.00th=[ 105], 50.00th=[ 105], 60.00th=[ 123], 00:16:46.214 | 70.00th=[ 128], 80.00th=[ 131], 90.00th=[ 134], 95.00th=[ 142], 00:16:46.214 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 220], 99.95th=[ 220], 00:16:46.214 | 99.99th=[ 228] 00:16:46.214 bw ( KiB/s): min=90112, max=185344, per=13.16%, avg=140339.20, stdev=22162.41, samples=20 00:16:46.214 iops : min= 352, max= 724, avg=548.20, stdev=86.57, samples=20 00:16:46.214 lat (msec) : 10=0.16%, 
20=0.22%, 50=0.76%, 100=16.03%, 250=82.83% 00:16:46.214 cpu : usr=0.94%, sys=1.41%, ctx=7393, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,5545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job6: (groupid=0, jobs=1): err= 0: pid=87403: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=188, BW=47.2MiB/s (49.5MB/s)(481MiB/10180msec); 0 zone resets 00:16:46.214 slat (usec): min=18, max=102528, avg=5198.74, stdev=9883.02 00:16:46.214 clat (msec): min=39, max=464, avg=333.30, stdev=71.10 00:16:46.214 lat (msec): min=39, max=464, avg=338.49, stdev=71.60 00:16:46.214 clat percentiles (msec): 00:16:46.214 | 1.00th=[ 105], 5.00th=[ 184], 10.00th=[ 266], 20.00th=[ 279], 00:16:46.214 | 30.00th=[ 288], 40.00th=[ 305], 50.00th=[ 359], 60.00th=[ 376], 00:16:46.214 | 70.00th=[ 384], 80.00th=[ 393], 90.00th=[ 405], 95.00th=[ 418], 00:16:46.214 | 99.00th=[ 439], 99.50th=[ 456], 99.90th=[ 464], 99.95th=[ 464], 00:16:46.214 | 99.99th=[ 464] 00:16:46.214 bw ( KiB/s): min=38912, max=63488, per=4.46%, avg=47611.70, stdev=8027.87, samples=20 00:16:46.214 iops : min= 152, max= 248, avg=185.95, stdev=31.38, samples=20 00:16:46.214 lat (msec) : 50=0.21%, 100=0.62%, 250=6.50%, 500=92.67% 00:16:46.214 cpu : usr=0.34%, sys=0.62%, ctx=1930, majf=0, minf=1 00:16:46.214 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:46.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.214 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.214 issued rwts: total=0,1924,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.214 job7: (groupid=0, jobs=1): err= 0: pid=87404: Tue Nov 19 16:11:51 2024 00:16:46.214 write: IOPS=195, BW=48.9MiB/s (51.3MB/s)(498MiB/10185msec); 0 zone resets 00:16:46.214 slat (usec): min=18, max=84759, avg=4787.52, stdev=9496.94 00:16:46.215 clat (msec): min=30, max=463, avg=322.30, stdev=87.25 00:16:46.215 lat (msec): min=30, max=463, avg=327.09, stdev=88.43 00:16:46.215 clat percentiles (msec): 00:16:46.215 | 1.00th=[ 56], 5.00th=[ 136], 10.00th=[ 228], 20.00th=[ 268], 00:16:46.215 | 30.00th=[ 279], 40.00th=[ 288], 50.00th=[ 355], 60.00th=[ 376], 00:16:46.215 | 70.00th=[ 388], 80.00th=[ 401], 90.00th=[ 409], 95.00th=[ 418], 00:16:46.215 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 460], 99.95th=[ 464], 00:16:46.215 | 99.99th=[ 464] 00:16:46.215 bw ( KiB/s): min=36352, max=82944, per=4.63%, avg=49359.10, stdev=12108.41, samples=20 00:16:46.215 iops : min= 142, max= 324, avg=192.75, stdev=47.29, samples=20 00:16:46.215 lat (msec) : 50=0.75%, 100=2.51%, 250=9.09%, 500=87.65% 00:16:46.215 cpu : usr=0.42%, sys=0.48%, ctx=2379, majf=0, minf=1 00:16:46.215 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:16:46.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.215 issued rwts: total=0,1992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.215 job8: (groupid=0, jobs=1): err= 0: pid=87405: Tue Nov 19 16:11:51 2024 00:16:46.215 
write: IOPS=186, BW=46.6MiB/s (48.8MB/s)(474MiB/10175msec); 0 zone resets 00:16:46.215 slat (usec): min=16, max=111147, avg=5275.72, stdev=10020.53 00:16:46.215 clat (msec): min=34, max=451, avg=338.02, stdev=68.11 00:16:46.215 lat (msec): min=34, max=451, avg=343.30, stdev=68.56 00:16:46.215 clat percentiles (msec): 00:16:46.215 | 1.00th=[ 100], 5.00th=[ 255], 10.00th=[ 271], 20.00th=[ 284], 00:16:46.215 | 30.00th=[ 292], 40.00th=[ 309], 50.00th=[ 363], 60.00th=[ 380], 00:16:46.215 | 70.00th=[ 393], 80.00th=[ 397], 90.00th=[ 405], 95.00th=[ 414], 00:16:46.215 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 451], 99.95th=[ 451], 00:16:46.215 | 99.99th=[ 451] 00:16:46.215 bw ( KiB/s): min=38912, max=59392, per=4.40%, avg=46903.50, stdev=7743.76, samples=20 00:16:46.215 iops : min= 152, max= 232, avg=183.20, stdev=30.26, samples=20 00:16:46.215 lat (msec) : 50=0.21%, 100=0.84%, 250=3.64%, 500=95.31% 00:16:46.215 cpu : usr=0.38%, sys=0.57%, ctx=1901, majf=0, minf=1 00:16:46.215 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:16:46.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.215 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.215 issued rwts: total=0,1896,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.215 job9: (groupid=0, jobs=1): err= 0: pid=87406: Tue Nov 19 16:11:51 2024 00:16:46.215 write: IOPS=203, BW=50.9MiB/s (53.4MB/s)(518MiB/10171msec); 0 zone resets 00:16:46.215 slat (usec): min=16, max=84646, avg=4629.22, stdev=8850.97 00:16:46.215 clat (msec): min=6, max=459, avg=309.54, stdev=80.83 00:16:46.215 lat (msec): min=6, max=459, avg=314.17, stdev=81.75 00:16:46.215 clat percentiles (msec): 00:16:46.215 | 1.00th=[ 47], 5.00th=[ 100], 10.00th=[ 220], 20.00th=[ 275], 00:16:46.215 | 30.00th=[ 288], 40.00th=[ 300], 50.00th=[ 338], 60.00th=[ 351], 00:16:46.215 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 384], 95.00th=[ 388], 00:16:46.215 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 439], 99.95th=[ 460], 00:16:46.215 | 99.99th=[ 460] 00:16:46.215 bw ( KiB/s): min=40960, max=103936, per=4.82%, avg=51404.80, stdev=13697.57, samples=20 00:16:46.215 iops : min= 160, max= 406, avg=200.80, stdev=53.51, samples=20 00:16:46.215 lat (msec) : 10=0.05%, 20=0.14%, 50=0.97%, 100=4.06%, 250=5.41% 00:16:46.215 lat (msec) : 500=89.38% 00:16:46.215 cpu : usr=0.34%, sys=0.61%, ctx=2508, majf=0, minf=1 00:16:46.215 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:16:46.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.215 issued rwts: total=0,2071,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.215 job10: (groupid=0, jobs=1): err= 0: pid=87407: Tue Nov 19 16:11:51 2024 00:16:46.215 write: IOPS=843, BW=211MiB/s (221MB/s)(2129MiB/10100msec); 0 zone resets 00:16:46.215 slat (usec): min=17, max=59506, avg=1165.91, stdev=2309.90 00:16:46.215 clat (msec): min=48, max=231, avg=74.71, stdev=34.61 00:16:46.215 lat (msec): min=48, max=231, avg=75.88, stdev=35.07 00:16:46.215 clat percentiles (msec): 00:16:46.215 | 1.00th=[ 50], 5.00th=[ 51], 10.00th=[ 51], 20.00th=[ 52], 00:16:46.215 | 30.00th=[ 53], 40.00th=[ 54], 50.00th=[ 55], 60.00th=[ 55], 00:16:46.215 | 70.00th=[ 58], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 133], 00:16:46.215 | 99.00th=[ 
142], 99.50th=[ 165], 99.90th=[ 215], 99.95th=[ 224], 00:16:46.215 | 99.99th=[ 232] 00:16:46.215 bw ( KiB/s): min=122880, max=307200, per=20.29%, avg=216396.80, stdev=88645.41, samples=20 00:16:46.215 iops : min= 480, max= 1200, avg=845.30, stdev=346.27, samples=20 00:16:46.215 lat (msec) : 50=2.62%, 100=69.89%, 250=27.49% 00:16:46.215 cpu : usr=1.21%, sys=2.17%, ctx=10818, majf=0, minf=1 00:16:46.215 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:46.215 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:46.215 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:46.215 issued rwts: total=0,8516,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:46.215 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:46.215 00:16:46.215 Run status group 0 (all jobs): 00:16:46.215 WRITE: bw=1042MiB/s (1092MB/s), 46.6MiB/s-211MiB/s (48.8MB/s-221MB/s), io=10.4GiB (11.1GB), run=10051-10185msec 00:16:46.215 00:16:46.215 Disk stats (read/write): 00:16:46.215 nvme0n1: ios=50/3696, merge=0/0, ticks=45/1204371, in_queue=1204416, util=97.84% 00:16:46.215 nvme10n1: ios=49/6782, merge=0/0, ticks=48/1203827, in_queue=1203875, util=97.89% 00:16:46.215 nvme1n1: ios=45/16339, merge=0/0, ticks=43/1216561, in_queue=1216604, util=97.95% 00:16:46.215 nvme2n1: ios=27/6588, merge=0/0, ticks=41/1204805, in_queue=1204846, util=98.08% 00:16:46.215 nvme3n1: ios=27/6860, merge=0/0, ticks=34/1205044, in_queue=1205078, util=98.11% 00:16:46.215 nvme4n1: ios=0/10943, merge=0/0, ticks=0/1213048, in_queue=1213048, util=98.20% 00:16:46.215 nvme5n1: ios=0/3715, merge=0/0, ticks=0/1204571, in_queue=1204571, util=98.41% 00:16:46.215 nvme6n1: ios=0/3850, merge=0/0, ticks=0/1206743, in_queue=1206743, util=98.48% 00:16:46.215 nvme7n1: ios=0/3656, merge=0/0, ticks=0/1203614, in_queue=1203614, util=98.68% 00:16:46.215 nvme8n1: ios=0/4011, merge=0/0, ticks=0/1206011, in_queue=1206011, util=98.80% 00:16:46.215 nvme9n1: ios=0/16889, merge=0/0, ticks=0/1212390, in_queue=1212390, util=98.87% 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.215 
16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:46.215 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.215 16:11:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:46.215 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:46.215 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:46.215 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.215 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.215 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:16:46.215 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 
16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 
16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 
16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.216 
16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:46.216 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.216 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:46.217 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.217 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 
00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:46.475 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:46.476 16:11:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:46.476 rmmod nvme_tcp 00:16:46.476 rmmod nvme_fabrics 00:16:46.476 rmmod nvme_keyring 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 86712 ']' 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 86712 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 86712 ']' 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 86712 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86712 00:16:46.476 killing process with pid 86712 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86712' 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 86712 00:16:46.476 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # 
wait 86712 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:46.735 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:46.994 00:16:46.994 real 0m49.530s 00:16:46.994 user 2m48.902s 00:16:46.994 sys 0m26.342s 00:16:46.994 16:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:46.994 ************************************ 00:16:46.994 END TEST nvmf_multiconnection 00:16:46.994 ************************************ 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:46.994 ************************************ 00:16:46.994 START TEST nvmf_initiator_timeout 00:16:46.994 ************************************ 00:16:46.994 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:47.254 * Looking for test storage... 00:16:47.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.254 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:47.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.254 --rc genhtml_branch_coverage=1 00:16:47.254 --rc genhtml_function_coverage=1 00:16:47.254 --rc genhtml_legend=1 00:16:47.254 --rc geninfo_all_blocks=1 00:16:47.254 --rc geninfo_unexecuted_blocks=1 00:16:47.254 00:16:47.254 ' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.255 --rc genhtml_branch_coverage=1 00:16:47.255 --rc genhtml_function_coverage=1 00:16:47.255 --rc genhtml_legend=1 00:16:47.255 --rc geninfo_all_blocks=1 00:16:47.255 --rc geninfo_unexecuted_blocks=1 00:16:47.255 00:16:47.255 ' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.255 --rc genhtml_branch_coverage=1 00:16:47.255 --rc genhtml_function_coverage=1 00:16:47.255 --rc genhtml_legend=1 00:16:47.255 --rc geninfo_all_blocks=1 00:16:47.255 --rc geninfo_unexecuted_blocks=1 00:16:47.255 00:16:47.255 ' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:47.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.255 --rc genhtml_branch_coverage=1 00:16:47.255 --rc genhtml_function_coverage=1 00:16:47.255 --rc genhtml_legend=1 00:16:47.255 --rc geninfo_all_blocks=1 00:16:47.255 --rc geninfo_unexecuted_blocks=1 00:16:47.255 00:16:47.255 ' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.255 16:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:47.255 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:47.255 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
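For reference, the veth/namespace topology that nvmf_veth_init builds in the traces below can be condensed to the sketch that follows. Interface names and addresses are taken directly from this log; this is a simplified outline of the setup (run as root), not the exact common.sh code path:

    # two initiator-side and two target-side veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # the target ends live inside the namespace where nvmf_tgt will run
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses 10.0.0.1/.2, target addresses 10.0.0.3/.4
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # a bridge joins the four *_br peer ends so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br  master nvmf_br
    ip link set nvmf_init_br2 master nvmf_br
    ip link set nvmf_tgt_br   master nvmf_br
    ip link set nvmf_tgt_br2  master nvmf_br
    # then: bring all links up, open TCP/4420 and bridge forwarding in iptables, ping 10.0.0.3/.4

The "Cannot find device" and "Cannot open network namespace" messages in the traces below are the expected output of the cleanup pass that runs before this setup.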
00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:47.256 Cannot find device "nvmf_init_br" 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:47.256 Cannot find device "nvmf_init_br2" 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:47.256 Cannot find device "nvmf_tgt_br" 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:47.256 Cannot find device "nvmf_tgt_br2" 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:47.256 Cannot find device "nvmf_init_br" 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:47.256 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:47.515 Cannot find device "nvmf_init_br2" 00:16:47.515 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:47.515 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:47.515 Cannot find device "nvmf_tgt_br" 00:16:47.515 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:47.515 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:47.515 Cannot find device "nvmf_tgt_br2" 00:16:47.515 16:11:53 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:47.515 16:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:47.515 Cannot find device "nvmf_br" 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:47.515 Cannot find device "nvmf_init_if" 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:47.515 Cannot find device "nvmf_init_if2" 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.515 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.515 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.516 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:47.775 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:47.776 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.776 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:16:47.776 00:16:47.776 --- 10.0.0.3 ping statistics --- 00:16:47.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.776 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:47.776 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:47.776 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:16:47.776 00:16:47.776 --- 10.0.0.4 ping statistics --- 00:16:47.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.776 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:47.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:16:47.776 00:16:47.776 --- 10.0.0.1 ping statistics --- 00:16:47.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.776 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:47.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:16:47.776 00:16:47.776 --- 10.0.0.2 ping statistics --- 00:16:47.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.776 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=87826 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 87826 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 87826 ']' 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.776 16:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.776 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:47.776 [2024-11-19 16:11:54.335181] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:16:47.776 [2024-11-19 16:11:54.335306] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.035 [2024-11-19 16:11:54.489164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:48.035 [2024-11-19 16:11:54.513932] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:48.035 [2024-11-19 16:11:54.513998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:48.035 [2024-11-19 16:11:54.514013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:48.035 [2024-11-19 16:11:54.514023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:48.035 [2024-11-19 16:11:54.514032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
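Once nvmf_tgt is up inside the namespace, the rpc_cmd traces that follow configure the target and attach the kernel initiator. Condensed, the sequence is roughly the one below (rpc_cmd in the trace is the harness wrapper around scripts/rpc.py; the hostnqn/hostid values are the ones generated earlier in this run):

    # back the subsystem with a 64 MB malloc bdev wrapped in a delay bdev (latencies in microseconds)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # TCP transport, subsystem cnode1, namespace, listener on 10.0.0.3:4420
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # kernel initiator connects from the host side of the veth topology
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
        --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1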
00:16:48.035 [2024-11-19 16:11:54.514949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:48.035 [2024-11-19 16:11:54.515046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.035 [2024-11-19 16:11:54.515135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:48.035 [2024-11-19 16:11:54.515136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.035 [2024-11-19 16:11:54.550550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:48.035 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 Malloc0 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 Delay0 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 [2024-11-19 16:11:54.687532] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:48.036 16:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:48.036 [2024-11-19 16:11:54.720400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.036 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:48.295 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.295 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.295 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.295 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.295 16:11:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=87883 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:50.315 16:11:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:50.315 [global] 00:16:50.315 thread=1 00:16:50.315 invalidate=1 00:16:50.315 rw=write 00:16:50.315 time_based=1 00:16:50.315 runtime=60 00:16:50.315 ioengine=libaio 00:16:50.315 direct=1 00:16:50.315 bs=4096 00:16:50.315 iodepth=1 00:16:50.315 norandommap=0 00:16:50.315 numjobs=1 00:16:50.315 00:16:50.315 verify_dump=1 00:16:50.315 verify_backlog=512 00:16:50.315 verify_state_save=0 00:16:50.315 do_verify=1 00:16:50.315 verify=crc32c-intel 00:16:50.315 [job0] 00:16:50.315 filename=/dev/nvme0n1 00:16:50.315 Could not set queue depth (nvme0n1) 00:16:50.574 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:50.574 fio-3.35 00:16:50.574 Starting 1 thread 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 true 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 true 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 true 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 true 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.859 16:11:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 true 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 true 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 true 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:56.391 true 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:16:56.391 16:12:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 87883 00:17:52.626 00:17:52.626 job0: (groupid=0, jobs=1): err= 0: pid=87904: Tue Nov 19 16:12:57 2024 00:17:52.626 read: IOPS=817, BW=3271KiB/s (3350kB/s)(192MiB/60000msec) 00:17:52.626 slat (nsec): min=10716, max=77310, avg=14571.28, stdev=4403.47 00:17:52.626 clat (usec): min=156, max=6472, avg=203.62, stdev=37.94 00:17:52.626 lat (usec): min=168, max=6485, avg=218.19, stdev=38.65 00:17:52.626 clat percentiles (usec): 00:17:52.626 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 186], 00:17:52.626 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 200], 60.00th=[ 206], 00:17:52.626 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 235], 95.00th=[ 245], 00:17:52.626 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 306], 99.95th=[ 318], 00:17:52.626 | 99.99th=[ 947] 00:17:52.626 write: IOPS=819, BW=3277KiB/s (3355kB/s)(192MiB/60000msec); 0 zone resets 00:17:52.626 slat (usec): min=12, max=13124, avg=21.75, stdev=76.83 00:17:52.626 clat (usec): min=113, max=40437k, avg=977.80, stdev=182393.89 00:17:52.626 lat (usec): min=130, max=40437k, avg=999.55, stdev=182393.89 00:17:52.626 clat percentiles (usec): 00:17:52.626 | 1.00th=[ 122], 5.00th=[ 127], 10.00th=[ 130], 20.00th=[ 137], 00:17:52.626 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:17:52.626 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 194], 00:17:52.626 | 
99.00th=[ 217], 99.50th=[ 227], 99.90th=[ 262], 99.95th=[ 416], 00:17:52.626 | 99.99th=[ 4686] 00:17:52.626 bw ( KiB/s): min= 4544, max=12288, per=100.00%, avg=9872.41, stdev=1803.30, samples=39 00:17:52.626 iops : min= 1136, max= 3072, avg=2468.10, stdev=450.83, samples=39 00:17:52.626 lat (usec) : 250=98.08%, 500=1.90%, 750=0.01%, 1000=0.01% 00:17:52.626 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:17:52.626 cpu : usr=0.56%, sys=2.31%, ctx=98237, majf=0, minf=5 00:17:52.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:52.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.626 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:52.626 issued rwts: total=49072,49152,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:52.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:52.626 00:17:52.626 Run status group 0 (all jobs): 00:17:52.626 READ: bw=3271KiB/s (3350kB/s), 3271KiB/s-3271KiB/s (3350kB/s-3350kB/s), io=192MiB (201MB), run=60000-60000msec 00:17:52.626 WRITE: bw=3277KiB/s (3355kB/s), 3277KiB/s-3277KiB/s (3355kB/s-3355kB/s), io=192MiB (201MB), run=60000-60000msec 00:17:52.626 00:17:52.626 Disk stats (read/write): 00:17:52.626 nvme0n1: ios=48908/49101, merge=0/0, ticks=10303/8123, in_queue=18426, util=99.81% 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:52.626 nvmf hotplug test: fio successful as expected 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
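For context on the fio results above: while the 60 s write job is running, the bdev_delay_update_latency calls earlier in the trace briefly raise Delay0's injected latencies (the values are in microseconds, so 31000000 is roughly 31 s, just past the Linux initiator's default 30 s I/O timeout) and then drop them back to 30 µs a few seconds later. A condensed view of that sequence, assuming the same rpc.py wrapper as above; the intent, as the test name suggests, appears to be to push individual I/Os past the initiator timeout and confirm the fio job still finishes cleanly:

    # stall I/O: ~31 s average latencies, 310 s p99 write latency
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    # restore the original 30 us latencies while fio keeps running
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read  30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30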
00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:52.626 rmmod nvme_tcp 00:17:52.626 rmmod nvme_fabrics 00:17:52.626 rmmod nvme_keyring 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 87826 ']' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 87826 ']' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.626 killing process with pid 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87826' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 87826 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:17:52.626 16:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:52.626 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:52.627 00:17:52.627 real 1m4.031s 00:17:52.627 user 3m47.785s 00:17:52.627 sys 0m24.275s 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.627 ************************************ 00:17:52.627 END TEST nvmf_initiator_timeout 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:52.627 ************************************ 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test 
nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.627 ************************************ 00:17:52.627 START TEST nvmf_nsid 00:17:52.627 ************************************ 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:17:52.627 * Looking for test storage... 00:17:52.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.627 --rc genhtml_branch_coverage=1 00:17:52.627 --rc genhtml_function_coverage=1 00:17:52.627 --rc genhtml_legend=1 00:17:52.627 --rc geninfo_all_blocks=1 00:17:52.627 --rc geninfo_unexecuted_blocks=1 00:17:52.627 00:17:52.627 ' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.627 --rc genhtml_branch_coverage=1 00:17:52.627 --rc genhtml_function_coverage=1 00:17:52.627 --rc genhtml_legend=1 00:17:52.627 --rc geninfo_all_blocks=1 00:17:52.627 --rc geninfo_unexecuted_blocks=1 00:17:52.627 00:17:52.627 ' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.627 --rc genhtml_branch_coverage=1 00:17:52.627 --rc genhtml_function_coverage=1 00:17:52.627 --rc genhtml_legend=1 00:17:52.627 --rc geninfo_all_blocks=1 00:17:52.627 --rc geninfo_unexecuted_blocks=1 00:17:52.627 00:17:52.627 ' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:52.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.627 --rc genhtml_branch_coverage=1 00:17:52.627 --rc genhtml_function_coverage=1 00:17:52.627 --rc genhtml_legend=1 00:17:52.627 --rc geninfo_all_blocks=1 00:17:52.627 --rc geninfo_unexecuted_blocks=1 00:17:52.627 00:17:52.627 ' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.627 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.628 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:52.628 Cannot find device "nvmf_init_br" 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:52.628 Cannot find device "nvmf_init_br2" 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:17:52.628 16:12:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:52.628 Cannot find device "nvmf_tgt_br" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:52.628 Cannot find device "nvmf_tgt_br2" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:52.628 Cannot find device "nvmf_init_br" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:52.628 Cannot find device "nvmf_init_br2" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:52.628 Cannot find device "nvmf_tgt_br" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:52.628 Cannot find device "nvmf_tgt_br2" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:52.628 Cannot find device "nvmf_br" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:52.628 Cannot find device "nvmf_init_if" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:52.628 Cannot find device "nvmf_init_if2" 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:52.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:17:52.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:52.628 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:52.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:52.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:17:52.629 00:17:52.629 --- 10.0.0.3 ping statistics --- 00:17:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.629 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:52.629 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:52.629 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:17:52.629 00:17:52.629 --- 10.0.0.4 ping statistics --- 00:17:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.629 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:52.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:52.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:52.629 00:17:52.629 --- 10.0.0.1 ping statistics --- 00:17:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.629 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:52.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:52.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:17:52.629 00:17:52.629 --- 10.0.0.2 ping statistics --- 00:17:52.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:52.629 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=88776 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 88776 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 88776 ']' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 [2024-11-19 16:12:58.443716] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:17:52.629 [2024-11-19 16:12:58.443824] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:52.629 [2024-11-19 16:12:58.588832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.629 [2024-11-19 16:12:58.607863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:52.629 [2024-11-19 16:12:58.607947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:52.629 [2024-11-19 16:12:58.607974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:52.629 [2024-11-19 16:12:58.607981] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:52.629 [2024-11-19 16:12:58.607987] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:52.629 [2024-11-19 16:12:58.608314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.629 [2024-11-19 16:12:58.636710] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=88805 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=cd92ee7c-9c57-4c9f-a74c-2a9f728e7a77 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=ae5d7fbd-773a-43ca-a79c-ba58bbc3b6e5 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=6ca3dc4a-2764-49b7-8050-459e6d386992 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.629 null0 00:17:52.629 null1 00:17:52.629 null2 00:17:52.629 [2024-11-19 16:12:58.812919] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.629 [2024-11-19 16:12:58.829811] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:17:52.629 [2024-11-19 16:12:58.829894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88805 ] 00:17:52.629 [2024-11-19 16:12:58.837020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:52.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 88805 /var/tmp/tgt2.sock 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 88805 ']' 00:17:52.629 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:17:52.630 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.630 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:17:52.630 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.630 16:12:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:52.630 [2024-11-19 16:12:58.985519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.630 [2024-11-19 16:12:59.010221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.630 [2024-11-19 16:12:59.053350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:52.630 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:52.630 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:17:52.630 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:17:53.197 [2024-11-19 16:12:59.608634] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.197 [2024-11-19 16:12:59.624680] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:17:53.197 nvme0n1 nvme0n2 00:17:53.197 nvme1n1 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:17:53.197 16:12:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:17:54.132 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:54.132 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:54.132 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:54.132 16:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:54.132 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid cd92ee7c-9c57-4c9f-a74c-2a9f728e7a77 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cd92ee7c9c574c9fa74c2a9f728e7a77 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CD92EE7C9C574C9FA74C2A9F728E7A77 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ CD92EE7C9C574C9FA74C2A9F728E7A77 == \C\D\9\2\E\E\7\C\9\C\5\7\4\C\9\F\A\7\4\C\2\A\9\F\7\2\8\E\7\A\7\7 ]] 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:17:54.391 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid ae5d7fbd-773a-43ca-a79c-ba58bbc3b6e5 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ae5d7fbd773a43caa79cba58bbc3b6e5 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo AE5D7FBD773A43CAA79CBA58BBC3B6E5 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ AE5D7FBD773A43CAA79CBA58BBC3B6E5 == \A\E\5\D\7\F\B\D\7\7\3\A\4\3\C\A\A\7\9\C\B\A\5\8\B\B\C\3\B\6\E\5 ]] 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:54.392 16:13:00 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:54.392 16:13:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 6ca3dc4a-2764-49b7-8050-459e6d386992 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=6ca3dc4a276449b78050459e6d386992 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 6CA3DC4A276449B78050459E6D386992 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 6CA3DC4A276449B78050459E6D386992 == \6\C\A\3\D\C\4\A\2\7\6\4\4\9\B\7\8\0\5\0\4\5\9\E\6\D\3\8\6\9\9\2 ]] 00:17:54.392 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 88805 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 88805 ']' 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 88805 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88805 00:17:54.651 killing process with pid 88805 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88805' 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 88805 00:17:54.651 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 88805 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # 
'[' tcp == tcp ']' 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:54.910 rmmod nvme_tcp 00:17:54.910 rmmod nvme_fabrics 00:17:54.910 rmmod nvme_keyring 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 88776 ']' 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 88776 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 88776 ']' 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 88776 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:54.910 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88776 00:17:55.170 killing process with pid 88776 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88776' 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 88776 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 88776 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:55.170 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:17:55.429 00:17:55.429 real 0m4.228s 00:17:55.429 user 0m6.360s 00:17:55.429 sys 0m1.517s 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.429 16:13:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 ************************************ 00:17:55.429 END TEST nvmf_nsid 00:17:55.429 ************************************ 00:17:55.429 16:13:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:55.429 00:17:55.429 real 6m49.461s 00:17:55.429 user 16m55.135s 00:17:55.429 sys 1m57.278s 00:17:55.429 16:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:55.429 16:13:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 ************************************ 00:17:55.429 END TEST nvmf_target_extra 00:17:55.429 ************************************ 00:17:55.429 16:13:02 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:55.429 16:13:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.429 16:13:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.429 16:13:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.429 ************************************ 00:17:55.429 START TEST nvmf_host 00:17:55.429 ************************************ 00:17:55.429 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:55.689 * Looking for test storage... 
00:17:55.689 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:55.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.689 --rc genhtml_branch_coverage=1 00:17:55.689 --rc genhtml_function_coverage=1 00:17:55.689 --rc genhtml_legend=1 00:17:55.689 --rc geninfo_all_blocks=1 00:17:55.689 --rc geninfo_unexecuted_blocks=1 00:17:55.689 00:17:55.689 ' 00:17:55.689 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:55.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:55.689 --rc genhtml_branch_coverage=1 00:17:55.689 --rc genhtml_function_coverage=1 00:17:55.689 --rc genhtml_legend=1 00:17:55.689 --rc geninfo_all_blocks=1 00:17:55.689 --rc geninfo_unexecuted_blocks=1 00:17:55.689 00:17:55.689 ' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:55.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.690 --rc genhtml_branch_coverage=1 00:17:55.690 --rc genhtml_function_coverage=1 00:17:55.690 --rc genhtml_legend=1 00:17:55.690 --rc geninfo_all_blocks=1 00:17:55.690 --rc geninfo_unexecuted_blocks=1 00:17:55.690 00:17:55.690 ' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:55.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.690 --rc genhtml_branch_coverage=1 00:17:55.690 --rc genhtml_function_coverage=1 00:17:55.690 --rc genhtml_legend=1 00:17:55.690 --rc geninfo_all_blocks=1 00:17:55.690 --rc geninfo_unexecuted_blocks=1 00:17:55.690 00:17:55.690 ' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.690 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:55.690 
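An aside on the lt / cmp_versions trace above: scripts/common.sh splits the dotted versions on ".-:" and compares them field by field, which is why the log walks ver1[v]=1 against ver2[v]=2 before "lt 1.15 2" returns 0 and the LCOV coverage options get exported. A minimal standalone sketch of that field-wise comparison idea (hypothetical helper name version_lt, not the repo's actual implementation):

    # Sketch: field-wise dotted-version "less than" check in bash.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)            # split "1.15" -> (1 15), "2" -> (2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0       # first differing field decides
            (( x > y )) && return 1
        done
        return 1                          # equal -> not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"  # matches the traced result above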
16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:55.690 ************************************ 00:17:55.690 START TEST nvmf_identify 00:17:55.690 ************************************ 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:55.690 * Looking for test storage... 00:17:55.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:17:55.690 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.951 --rc genhtml_branch_coverage=1 00:17:55.951 --rc genhtml_function_coverage=1 00:17:55.951 --rc genhtml_legend=1 00:17:55.951 --rc geninfo_all_blocks=1 00:17:55.951 --rc geninfo_unexecuted_blocks=1 00:17:55.951 00:17:55.951 ' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.951 --rc genhtml_branch_coverage=1 00:17:55.951 --rc genhtml_function_coverage=1 00:17:55.951 --rc genhtml_legend=1 00:17:55.951 --rc geninfo_all_blocks=1 00:17:55.951 --rc geninfo_unexecuted_blocks=1 00:17:55.951 00:17:55.951 ' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.951 --rc genhtml_branch_coverage=1 00:17:55.951 --rc genhtml_function_coverage=1 00:17:55.951 --rc genhtml_legend=1 00:17:55.951 --rc geninfo_all_blocks=1 00:17:55.951 --rc geninfo_unexecuted_blocks=1 00:17:55.951 00:17:55.951 ' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:55.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:55.951 --rc genhtml_branch_coverage=1 00:17:55.951 --rc genhtml_function_coverage=1 00:17:55.951 --rc genhtml_legend=1 00:17:55.951 --rc geninfo_all_blocks=1 00:17:55.951 --rc geninfo_unexecuted_blocks=1 00:17:55.951 00:17:55.951 ' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.951 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.952 
16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:55.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.952 16:13:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:55.952 Cannot find device "nvmf_init_br" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:55.952 Cannot find device "nvmf_init_br2" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:55.952 Cannot find device "nvmf_tgt_br" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:55.952 Cannot find device "nvmf_tgt_br2" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:55.952 Cannot find device "nvmf_init_br" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:55.952 Cannot find device "nvmf_init_br2" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:55.952 Cannot find device "nvmf_tgt_br" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:55.952 Cannot find device "nvmf_tgt_br2" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:55.952 Cannot find device "nvmf_br" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:55.952 Cannot find device "nvmf_init_if" 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:55.952 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:56.212 Cannot find device "nvmf_init_if2" 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:56.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:56.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:56.212 
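The commands traced above are nvmf_veth_init building the test topology: the "Cannot find device" / "Cannot open network namespace" messages are just the teardown of a topology that does not exist yet, after which a target namespace, four veth pairs, and (in the lines that follow) a bridge and iptables accept rules are created. Condensed into plain ip commands with the interface names the trace shows (a hedged sketch of the traced sequence, not the common.sh function itself; run as root):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk     # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if             # initiator side
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
    # iptables ACCEPT rules for port 4420 and bridge forwarding follow in the trace below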
16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:56.212 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:56.212 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:17:56.212 00:17:56.212 --- 10.0.0.3 ping statistics --- 00:17:56.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.212 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:56.212 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:56.212 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:56.212 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:56.212 00:17:56.212 --- 10.0.0.4 ping statistics --- 00:17:56.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.212 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:56.213 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:56.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:56.213 00:17:56.213 --- 10.0.0.1 ping statistics --- 00:17:56.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.213 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:56.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:17:56.472 00:17:56.472 --- 10.0.0.2 ping statistics --- 00:17:56.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.472 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=89154 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 89154 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 89154 ']' 00:17:56.472 
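With all four pings reporting 0% loss and nvme-tcp loaded, identify.sh starts the target inside the namespace and waits for its RPC socket before issuing any RPCs. A hedged sketch of that launch-and-wait step, mirroring the command line traced above (waitforlisten is the test-harness helper; the polling loop below is only an illustrative stand-in for it, not its actual code):

    # Launch the SPDK NVMe-oF target inside the test namespace, as traced above.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Illustrative stand-in for waitforlisten: poll until the RPC socket answers.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
        sleep 0.5
    done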
16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.472 16:13:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.472 [2024-11-19 16:13:03.026266] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:17:56.472 [2024-11-19 16:13:03.026367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.472 [2024-11-19 16:13:03.182128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.732 [2024-11-19 16:13:03.210165] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.732 [2024-11-19 16:13:03.210233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.732 [2024-11-19 16:13:03.210265] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.732 [2024-11-19 16:13:03.210276] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.732 [2024-11-19 16:13:03.210285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
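The app-start notices above also point at the runtime tracing hooks: with tracepoint group mask 0xFFFF the target keeps its trace buffer in /dev/shm/nvmf_trace.0, and the notice itself names the capture command. For reference (commands and paths as printed in the notices; the copy destination is an arbitrary example):

    # Snapshot events from the running target, as the notice suggests.
    spdk_trace -s nvmf -i 0
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0.saved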
00:17:56.732 [2024-11-19 16:13:03.211192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.732 [2024-11-19 16:13:03.211334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.732 [2024-11-19 16:13:03.211877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.732 [2024-11-19 16:13:03.211889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.732 [2024-11-19 16:13:03.272300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 [2024-11-19 16:13:03.338608] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 Malloc0 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.732 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.994 [2024-11-19 16:13:03.448613] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.994 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:56.994 [ 00:17:56.994 { 00:17:56.994 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:56.994 "subtype": "Discovery", 00:17:56.994 "listen_addresses": [ 00:17:56.994 { 00:17:56.994 "trtype": "TCP", 00:17:56.994 "adrfam": "IPv4", 00:17:56.994 "traddr": "10.0.0.3", 00:17:56.994 "trsvcid": "4420" 00:17:56.994 } 00:17:56.994 ], 00:17:56.994 "allow_any_host": true, 00:17:56.994 "hosts": [] 00:17:56.994 }, 00:17:56.994 { 00:17:56.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.994 "subtype": "NVMe", 00:17:56.994 "listen_addresses": [ 00:17:56.994 { 00:17:56.995 "trtype": "TCP", 00:17:56.995 "adrfam": "IPv4", 00:17:56.995 "traddr": "10.0.0.3", 00:17:56.995 "trsvcid": "4420" 00:17:56.995 } 00:17:56.995 ], 00:17:56.995 "allow_any_host": true, 00:17:56.995 "hosts": [], 00:17:56.995 "serial_number": "SPDK00000000000001", 00:17:56.995 "model_number": "SPDK bdev Controller", 00:17:56.995 "max_namespaces": 32, 00:17:56.995 "min_cntlid": 1, 00:17:56.995 "max_cntlid": 65519, 00:17:56.995 "namespaces": [ 00:17:56.995 { 00:17:56.995 "nsid": 1, 00:17:56.995 "bdev_name": "Malloc0", 00:17:56.995 "name": "Malloc0", 00:17:56.995 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:56.995 "eui64": "ABCDEF0123456789", 00:17:56.995 "uuid": "f448fcaa-98dd-4cf2-a713-168ddc8d147e" 00:17:56.995 } 00:17:56.995 ] 00:17:56.995 } 00:17:56.995 ] 00:17:56.995 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.995 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:56.995 [2024-11-19 16:13:03.503931] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
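Everything the nvmf_get_subsystems dump above reports (the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with the Malloc0 namespace listening on 10.0.0.3:4420) was created by the rpc_cmd calls traced just before it. Collapsed into plain rpc.py invocations, the same sequence looks roughly like this (a sketch of the traced RPCs, not a verbatim excerpt of identify.sh):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # host/identify.sh@24
    $RPC bdev_malloc_create 64 512 -b Malloc0                       # 64 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420   # as traced; resolves to nqn.2014-08.org.nvmexpress.discovery
    $RPC nvmf_get_subsystems                                        # produces the JSON shown above

After that, spdk_nvme_identify is pointed at the discovery service with the transport string traced above, which is what kicks off the controller-initialization DEBUG trail that follows.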
00:17:56.995 [2024-11-19 16:13:03.504006] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89180 ] 00:17:56.995 [2024-11-19 16:13:03.663041] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:17:56.995 [2024-11-19 16:13:03.663131] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:56.995 [2024-11-19 16:13:03.663139] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:56.995 [2024-11-19 16:13:03.663155] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:56.995 [2024-11-19 16:13:03.663171] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:56.995 [2024-11-19 16:13:03.663672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:17:56.995 [2024-11-19 16:13:03.663735] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x713b00 0 00:17:56.995 [2024-11-19 16:13:03.676372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:56.995 [2024-11-19 16:13:03.676446] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:56.995 [2024-11-19 16:13:03.676453] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:56.995 [2024-11-19 16:13:03.676456] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:56.995 [2024-11-19 16:13:03.676497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.676504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.676508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.995 [2024-11-19 16:13:03.676526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:56.995 [2024-11-19 16:13:03.676567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.995 [2024-11-19 16:13:03.684337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.995 [2024-11-19 16:13:03.684375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.995 [2024-11-19 16:13:03.684397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.995 [2024-11-19 16:13:03.684465] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:56.995 [2024-11-19 16:13:03.684476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:17:56.995 [2024-11-19 16:13:03.684482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:17:56.995 [2024-11-19 16:13:03.684508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684514] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:17:56.995 [2024-11-19 16:13:03.684518] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.995 [2024-11-19 16:13:03.684534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.995 [2024-11-19 16:13:03.684571] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.995 [2024-11-19 16:13:03.684641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.995 [2024-11-19 16:13:03.684648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.995 [2024-11-19 16:13:03.684657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.995 [2024-11-19 16:13:03.684667] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:17:56.995 [2024-11-19 16:13:03.684674] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:17:56.995 [2024-11-19 16:13:03.684682] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.995 [2024-11-19 16:13:03.684698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.995 [2024-11-19 16:13:03.684718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.995 [2024-11-19 16:13:03.684799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.995 [2024-11-19 16:13:03.684807] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.995 [2024-11-19 16:13:03.684810] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.995 [2024-11-19 16:13:03.684821] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:17:56.995 [2024-11-19 16:13:03.684830] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:56.995 [2024-11-19 16:13:03.684838] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684847] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.995 [2024-11-19 16:13:03.684854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.995 [2024-11-19 16:13:03.684875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.995 [2024-11-19 16:13:03.684918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.995 [2024-11-19 16:13:03.684926] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.995 [2024-11-19 16:13:03.684930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.995 [2024-11-19 16:13:03.684941] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:56.995 [2024-11-19 16:13:03.684951] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.684960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.995 [2024-11-19 16:13:03.684968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.995 [2024-11-19 16:13:03.684988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.995 [2024-11-19 16:13:03.685063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.995 [2024-11-19 16:13:03.685071] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.995 [2024-11-19 16:13:03.685075] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.685080] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.995 [2024-11-19 16:13:03.685086] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:56.995 [2024-11-19 16:13:03.685091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:56.995 [2024-11-19 16:13:03.685100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:56.995 [2024-11-19 16:13:03.685213] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:17:56.995 [2024-11-19 16:13:03.685220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:56.995 [2024-11-19 16:13:03.685230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.995 [2024-11-19 16:13:03.685235] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.996 [2024-11-19 16:13:03.685271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.996 [2024-11-19 16:13:03.685323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.996 [2024-11-19 16:13:03.685331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.996 [2024-11-19 16:13:03.685335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:56.996 [2024-11-19 16:13:03.685339] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.996 [2024-11-19 16:13:03.685351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:56.996 [2024-11-19 16:13:03.685377] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.996 [2024-11-19 16:13:03.685419] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.996 [2024-11-19 16:13:03.685466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.996 [2024-11-19 16:13:03.685474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.996 [2024-11-19 16:13:03.685478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.996 [2024-11-19 16:13:03.685503] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:56.996 [2024-11-19 16:13:03.685508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.685517] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:17:56.996 [2024-11-19 16:13:03.685528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.685540] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.996 [2024-11-19 16:13:03.685575] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.996 [2024-11-19 16:13:03.685685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.996 [2024-11-19 16:13:03.685692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.996 [2024-11-19 16:13:03.685697] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685701] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x713b00): datao=0, datal=4096, cccid=0 00:17:56.996 [2024-11-19 16:13:03.685706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x759fc0) on tqpair(0x713b00): expected_datao=0, payload_size=4096 00:17:56.996 [2024-11-19 16:13:03.685711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:56.996 [2024-11-19 16:13:03.685720] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685725] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.996 [2024-11-19 16:13:03.685741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.996 [2024-11-19 16:13:03.685744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685748] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.996 [2024-11-19 16:13:03.685758] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:17:56.996 [2024-11-19 16:13:03.685763] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:17:56.996 [2024-11-19 16:13:03.685768] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:17:56.996 [2024-11-19 16:13:03.685778] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:17:56.996 [2024-11-19 16:13:03.685783] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:17:56.996 [2024-11-19 16:13:03.685789] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.685800] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.685809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:56.996 [2024-11-19 16:13:03.685848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.996 [2024-11-19 16:13:03.685906] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.996 [2024-11-19 16:13:03.685914] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.996 [2024-11-19 16:13:03.685918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.996 [2024-11-19 16:13:03.685930] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685938] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.996 [2024-11-19 16:13:03.685952] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685960] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.996 [2024-11-19 16:13:03.685972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.685985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.996 [2024-11-19 16:13:03.685991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.685999] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.686004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.996 [2024-11-19 16:13:03.686010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.686034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:56.996 [2024-11-19 16:13:03.686058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.686063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x713b00) 00:17:56.996 [2024-11-19 16:13:03.686070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.996 [2024-11-19 16:13:03.686094] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x759fc0, cid 0, qid 0 00:17:56.996 [2024-11-19 16:13:03.686103] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a140, cid 1, qid 0 00:17:56.996 [2024-11-19 16:13:03.686108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a2c0, cid 2, qid 0 00:17:56.996 [2024-11-19 16:13:03.686113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.996 [2024-11-19 16:13:03.686118] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a5c0, cid 4, qid 0 00:17:56.996 [2024-11-19 16:13:03.686210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.996 [2024-11-19 16:13:03.686217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.996 [2024-11-19 16:13:03.686221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.996 [2024-11-19 16:13:03.686226] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a5c0) on tqpair=0x713b00 00:17:56.996 [2024-11-19 16:13:03.686241] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:17:56.996 [2024-11-19 16:13:03.686247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:17:56.997 [2024-11-19 16:13:03.686260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x713b00) 00:17:56.997 [2024-11-19 16:13:03.686273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.997 [2024-11-19 16:13:03.686309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a5c0, cid 4, qid 0 00:17:56.997 [2024-11-19 16:13:03.686359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.997 [2024-11-19 16:13:03.686367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.997 [2024-11-19 16:13:03.686371] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686375] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x713b00): datao=0, datal=4096, cccid=4 00:17:56.997 [2024-11-19 16:13:03.686380] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x75a5c0) on tqpair(0x713b00): expected_datao=0, payload_size=4096 00:17:56.997 [2024-11-19 16:13:03.686400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686408] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686426] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.997 [2024-11-19 16:13:03.686441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.997 [2024-11-19 16:13:03.686444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a5c0) on tqpair=0x713b00 00:17:56.997 [2024-11-19 16:13:03.686463] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:17:56.997 [2024-11-19 16:13:03.686520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686532] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x713b00) 00:17:56.997 [2024-11-19 16:13:03.686541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.997 [2024-11-19 16:13:03.686550] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x713b00) 00:17:56.997 [2024-11-19 16:13:03.686564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:56.997 [2024-11-19 16:13:03.686598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x75a5c0, cid 4, qid 0 00:17:56.997 [2024-11-19 16:13:03.686607] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a740, cid 5, qid 0 00:17:56.997 [2024-11-19 16:13:03.686754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.997 [2024-11-19 16:13:03.686761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.997 [2024-11-19 16:13:03.686766] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686769] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x713b00): datao=0, datal=1024, cccid=4 00:17:56.997 [2024-11-19 16:13:03.686774] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x75a5c0) on tqpair(0x713b00): expected_datao=0, payload_size=1024 00:17:56.997 [2024-11-19 16:13:03.686779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686786] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686790] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.997 [2024-11-19 16:13:03.686802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.997 [2024-11-19 16:13:03.686805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686810] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a740) on tqpair=0x713b00 00:17:56.997 [2024-11-19 16:13:03.686830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.997 [2024-11-19 16:13:03.686869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.997 [2024-11-19 16:13:03.686874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a5c0) on tqpair=0x713b00 00:17:56.997 [2024-11-19 16:13:03.686902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.686908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x713b00) 00:17:56.997 [2024-11-19 16:13:03.686916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.997 [2024-11-19 16:13:03.686946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a5c0, cid 4, qid 0 00:17:56.997 [2024-11-19 16:13:03.687029] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.997 [2024-11-19 16:13:03.687037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.997 [2024-11-19 16:13:03.687041] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687045] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x713b00): datao=0, datal=3072, cccid=4 00:17:56.997 [2024-11-19 16:13:03.687050] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x75a5c0) on tqpair(0x713b00): expected_datao=0, payload_size=3072 00:17:56.997 [2024-11-19 16:13:03.687055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687063] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687067] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687076] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.997 [2024-11-19 16:13:03.687083] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.997 [2024-11-19 16:13:03.687086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687091] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a5c0) on tqpair=0x713b00 00:17:56.997 [2024-11-19 16:13:03.687101] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x713b00) 00:17:56.997 [2024-11-19 16:13:03.687114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.997 [2024-11-19 16:13:03.687141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a5c0, cid 4, qid 0 00:17:56.997 [2024-11-19 16:13:03.687208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:56.997 [2024-11-19 16:13:03.687216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:56.997 [2024-11-19 16:13:03.687220] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687224] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x713b00): datao=0, datal=8, cccid=4 00:17:56.997 [2024-11-19 16:13:03.687229] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x75a5c0) on tqpair(0x713b00): expected_datao=0, payload_size=8 00:17:56.997 [2024-11-19 16:13:03.687234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687256] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687262] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.997 [2024-11-19 16:13:03.687290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.997 [2024-11-19 16:13:03.687294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.997 [2024-11-19 16:13:03.687299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a5c0) on tqpair=0x713b00 00:17:56.997 ===================================================== 00:17:56.997 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:56.997 ===================================================== 00:17:56.997 Controller Capabilities/Features 00:17:56.997 ================================ 00:17:56.997 Vendor ID: 0000 00:17:56.997 Subsystem Vendor ID: 0000 00:17:56.997 Serial Number: .................... 00:17:56.997 Model Number: ........................................ 
00:17:56.997 Firmware Version: 25.01
00:17:56.997 Recommended Arb Burst: 0
00:17:56.997 IEEE OUI Identifier: 00 00 00
00:17:56.997 Multi-path I/O
00:17:56.997 May have multiple subsystem ports: No
00:17:56.997 May have multiple controllers: No
00:17:56.997 Associated with SR-IOV VF: No
00:17:56.997 Max Data Transfer Size: 131072
00:17:56.997 Max Number of Namespaces: 0
00:17:56.997 Max Number of I/O Queues: 1024
00:17:56.997 NVMe Specification Version (VS): 1.3
00:17:56.997 NVMe Specification Version (Identify): 1.3
00:17:56.997 Maximum Queue Entries: 128
00:17:56.997 Contiguous Queues Required: Yes
00:17:56.997 Arbitration Mechanisms Supported
00:17:56.997 Weighted Round Robin: Not Supported
00:17:56.997 Vendor Specific: Not Supported
00:17:56.997 Reset Timeout: 15000 ms
00:17:56.997 Doorbell Stride: 4 bytes
00:17:56.997 NVM Subsystem Reset: Not Supported
00:17:56.998 Command Sets Supported
00:17:56.998 NVM Command Set: Supported
00:17:56.998 Boot Partition: Not Supported
00:17:56.998 Memory Page Size Minimum: 4096 bytes
00:17:56.998 Memory Page Size Maximum: 4096 bytes
00:17:56.998 Persistent Memory Region: Not Supported
00:17:56.998 Optional Asynchronous Events Supported
00:17:56.998 Namespace Attribute Notices: Not Supported
00:17:56.998 Firmware Activation Notices: Not Supported
00:17:56.998 ANA Change Notices: Not Supported
00:17:56.998 PLE Aggregate Log Change Notices: Not Supported
00:17:56.998 LBA Status Info Alert Notices: Not Supported
00:17:56.998 EGE Aggregate Log Change Notices: Not Supported
00:17:56.998 Normal NVM Subsystem Shutdown event: Not Supported
00:17:56.998 Zone Descriptor Change Notices: Not Supported
00:17:56.998 Discovery Log Change Notices: Supported
00:17:56.998 Controller Attributes
00:17:56.998 128-bit Host Identifier: Not Supported
00:17:56.998 Non-Operational Permissive Mode: Not Supported
00:17:56.998 NVM Sets: Not Supported
00:17:56.998 Read Recovery Levels: Not Supported
00:17:56.998 Endurance Groups: Not Supported
00:17:56.998 Predictable Latency Mode: Not Supported
00:17:56.998 Traffic Based Keep ALive: Not Supported
00:17:56.998 Namespace Granularity: Not Supported
00:17:56.998 SQ Associations: Not Supported
00:17:56.998 UUID List: Not Supported
00:17:56.998 Multi-Domain Subsystem: Not Supported
00:17:56.998 Fixed Capacity Management: Not Supported
00:17:56.998 Variable Capacity Management: Not Supported
00:17:56.998 Delete Endurance Group: Not Supported
00:17:56.998 Delete NVM Set: Not Supported
00:17:56.998 Extended LBA Formats Supported: Not Supported
00:17:56.998 Flexible Data Placement Supported: Not Supported
00:17:56.998 
00:17:56.998 Controller Memory Buffer Support
00:17:56.998 ================================
00:17:56.998 Supported: No
00:17:56.998 
00:17:56.998 Persistent Memory Region Support
00:17:56.998 ================================
00:17:56.998 Supported: No
00:17:56.998 
00:17:56.998 Admin Command Set Attributes
00:17:56.998 ============================
00:17:56.998 Security Send/Receive: Not Supported
00:17:56.998 Format NVM: Not Supported
00:17:56.998 Firmware Activate/Download: Not Supported
00:17:56.998 Namespace Management: Not Supported
00:17:56.998 Device Self-Test: Not Supported
00:17:56.998 Directives: Not Supported
00:17:56.998 NVMe-MI: Not Supported
00:17:56.998 Virtualization Management: Not Supported
00:17:56.998 Doorbell Buffer Config: Not Supported
00:17:56.998 Get LBA Status Capability: Not Supported
00:17:56.998 Command & Feature Lockdown Capability: Not Supported
00:17:56.998 Abort Command Limit: 1
00:17:56.998 Async Event Request Limit: 4
00:17:56.998 Number of Firmware Slots: N/A
00:17:56.998 Firmware Slot 1 Read-Only: N/A
00:17:56.998 Firmware Activation Without Reset: N/A
00:17:56.998 Multiple Update Detection Support: N/A
00:17:56.998 Firmware Update Granularity: No Information Provided
00:17:56.998 Per-Namespace SMART Log: No
00:17:56.998 Asymmetric Namespace Access Log Page: Not Supported
00:17:56.998 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:56.998 Command Effects Log Page: Not Supported
00:17:56.998 Get Log Page Extended Data: Supported
00:17:56.998 Telemetry Log Pages: Not Supported
00:17:56.998 Persistent Event Log Pages: Not Supported
00:17:56.998 Supported Log Pages Log Page: May Support
00:17:56.998 Commands Supported & Effects Log Page: Not Supported
00:17:56.998 Feature Identifiers & Effects Log Page:May Support
00:17:56.998 NVMe-MI Commands & Effects Log Page: May Support
00:17:56.998 Data Area 4 for Telemetry Log: Not Supported
00:17:56.998 Error Log Page Entries Supported: 128
00:17:56.998 Keep Alive: Not Supported
00:17:56.998 
00:17:56.998 NVM Command Set Attributes
00:17:56.998 ==========================
00:17:56.998 Submission Queue Entry Size
00:17:56.998 Max: 1
00:17:56.998 Min: 1
00:17:56.998 Completion Queue Entry Size
00:17:56.998 Max: 1
00:17:56.998 Min: 1
00:17:56.998 Number of Namespaces: 0
00:17:56.998 Compare Command: Not Supported
00:17:56.998 Write Uncorrectable Command: Not Supported
00:17:56.998 Dataset Management Command: Not Supported
00:17:56.998 Write Zeroes Command: Not Supported
00:17:56.998 Set Features Save Field: Not Supported
00:17:56.998 Reservations: Not Supported
00:17:56.998 Timestamp: Not Supported
00:17:56.998 Copy: Not Supported
00:17:56.998 Volatile Write Cache: Not Present
00:17:56.998 Atomic Write Unit (Normal): 1
00:17:56.998 Atomic Write Unit (PFail): 1
00:17:56.998 Atomic Compare & Write Unit: 1
00:17:56.998 Fused Compare & Write: Supported
00:17:56.998 Scatter-Gather List
00:17:56.998 SGL Command Set: Supported
00:17:56.998 SGL Keyed: Supported
00:17:56.998 SGL Bit Bucket Descriptor: Not Supported
00:17:56.998 SGL Metadata Pointer: Not Supported
00:17:56.998 Oversized SGL: Not Supported
00:17:56.998 SGL Metadata Address: Not Supported
00:17:56.998 SGL Offset: Supported
00:17:56.998 Transport SGL Data Block: Not Supported
00:17:56.998 Replay Protected Memory Block: Not Supported
00:17:56.998 
00:17:56.998 Firmware Slot Information
00:17:56.998 =========================
00:17:56.998 Active slot: 0
00:17:56.998 
00:17:56.998 
00:17:56.998 Error Log
00:17:56.998 =========
00:17:56.998 
00:17:56.998 Active Namespaces
00:17:56.998 =================
00:17:56.998 Discovery Log Page
00:17:56.998 ==================
00:17:56.998 Generation Counter: 2
00:17:56.998 Number of Records: 2
00:17:56.998 Record Format: 0
00:17:56.998 
00:17:56.998 Discovery Log Entry 0
00:17:56.998 ----------------------
00:17:56.998 Transport Type: 3 (TCP)
00:17:56.998 Address Family: 1 (IPv4)
00:17:56.998 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:56.998 Entry Flags:
00:17:56.998 Duplicate Returned Information: 1
00:17:56.998 Explicit Persistent Connection Support for Discovery: 1
00:17:56.998 Transport Requirements:
00:17:56.998 Secure Channel: Not Required
00:17:56.998 Port ID: 0 (0x0000)
00:17:56.998 Controller ID: 65535 (0xffff)
00:17:56.998 Admin Max SQ Size: 128
00:17:56.998 Transport Service Identifier: 4420
00:17:56.998 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:56.998 Transport Address: 10.0.0.3 00:17:56.998
Discovery Log Entry 1 00:17:56.998 ---------------------- 00:17:56.998 Transport Type: 3 (TCP) 00:17:56.998 Address Family: 1 (IPv4) 00:17:56.998 Subsystem Type: 2 (NVM Subsystem) 00:17:56.998 Entry Flags: 00:17:56.999 Duplicate Returned Information: 0 00:17:56.999 Explicit Persistent Connection Support for Discovery: 0 00:17:56.999 Transport Requirements: 00:17:56.999 Secure Channel: Not Required 00:17:56.999 Port ID: 0 (0x0000) 00:17:56.999 Controller ID: 65535 (0xffff) 00:17:56.999 Admin Max SQ Size: 128 00:17:56.999 Transport Service Identifier: 4420 00:17:56.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:56.999 Transport Address: 10.0.0.3 [2024-11-19 16:13:03.687430] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:17:56.999 [2024-11-19 16:13:03.687444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x759fc0) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.999 [2024-11-19 16:13:03.687457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a140) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.999 [2024-11-19 16:13:03.687467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a2c0) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.999 [2024-11-19 16:13:03.687477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:56.999 [2024-11-19 16:13:03.687494] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.687511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.687536] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.687586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.687593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.687597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687618] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.687626] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.687650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.687715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.687722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.687726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687735] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:17:56.999 [2024-11-19 16:13:03.687740] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:17:56.999 [2024-11-19 16:13:03.687751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.687767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.687788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.687831] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.687838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.687842] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687846] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687862] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687866] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.687873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.687893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.687938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.687945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.687949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.687965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687970] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.687974] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.687981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.688001] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.688083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.688091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.688095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.688111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688116] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.688128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.688149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.688191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.688199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.688203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.688219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.688228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:56.999 [2024-11-19 16:13:03.688236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:56.999 [2024-11-19 16:13:03.688256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:56.999 [2024-11-19 16:13:03.692292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:56.999 [2024-11-19 16:13:03.692317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:56.999 [2024-11-19 16:13:03.692338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:56.999 [2024-11-19 16:13:03.692344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:56.999 [2024-11-19 16:13:03.692361] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.000 [2024-11-19 16:13:03.692367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.000 [2024-11-19 16:13:03.692371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x713b00) 00:17:57.000 [2024-11-19 16:13:03.692381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.000 [2024-11-19 16:13:03.692411] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x75a440, cid 3, qid 0 00:17:57.000 [2024-11-19 16:13:03.692468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.000 [2024-11-19 16:13:03.692475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.000 [2024-11-19 16:13:03.692479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.000 [2024-11-19 16:13:03.692483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x75a440) on tqpair=0x713b00 00:17:57.000 [2024-11-19 16:13:03.692492] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:17:57.262 00:17:57.262 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:57.262 [2024-11-19 16:13:03.739041] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:17:57.262 [2024-11-19 16:13:03.739104] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89183 ] 00:17:57.262 [2024-11-19 16:13:03.896736] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:17:57.262 [2024-11-19 16:13:03.896794] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:57.262 [2024-11-19 16:13:03.896801] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:57.262 [2024-11-19 16:13:03.896814] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:57.262 [2024-11-19 16:13:03.896827] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:57.262 [2024-11-19 16:13:03.897172] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:17:57.262 [2024-11-19 16:13:03.897233] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ab7b00 0 00:17:57.262 [2024-11-19 16:13:03.902273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:57.262 [2024-11-19 16:13:03.902298] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:57.262 [2024-11-19 16:13:03.902305] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:57.263 [2024-11-19 16:13:03.902309] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:57.263 [2024-11-19 16:13:03.902341] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.902348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.902352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.902366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:57.263 [2024-11-19 16:13:03.902399] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 
0, qid 0 00:17:57.263 [2024-11-19 16:13:03.909258] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.909279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.909284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.909302] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:57.263 [2024-11-19 16:13:03.909310] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:17:57.263 [2024-11-19 16:13:03.909317] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:17:57.263 [2024-11-19 16:13:03.909334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.909351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.909377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.909537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.909553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.909557] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.909568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:17:57.263 [2024-11-19 16:13:03.909577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:17:57.263 [2024-11-19 16:13:03.909585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.909594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.909601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.909622] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.910041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.910054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.910059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910063] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.910070] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:17:57.263 [2024-11-19 16:13:03.910079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.910088] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910092] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.910103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.910123] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.910233] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.910267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.910272] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.910282] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.910295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.910311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.910333] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.910655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.910668] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.910673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910677] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.910683] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:17:57.263 [2024-11-19 16:13:03.910688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.910697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.910808] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:17:57.263 [2024-11-19 16:13:03.910814] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.910824] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.910832] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.910866] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.910891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.911362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.911392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.911396] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.911406] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.263 [2024-11-19 16:13:03.911418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.911434] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.911468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.911528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.911535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.911539] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911543] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.911547] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.263 [2024-11-19 16:13:03.911553] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:17:57.263 [2024-11-19 16:13:03.911561] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:17:57.263 [2024-11-19 16:13:03.911572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.263 [2024-11-19 16:13:03.911582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.263 [2024-11-19 16:13:03.911594] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.263 [2024-11-19 16:13:03.911614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.263 [2024-11-19 16:13:03.911923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.263 [2024-11-19 16:13:03.911938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.263 [2024-11-19 16:13:03.911943] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911947] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=4096, cccid=0 00:17:57.263 [2024-11-19 16:13:03.911952] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afdfc0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=4096 00:17:57.263 [2024-11-19 16:13:03.911957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911965] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911970] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.263 [2024-11-19 16:13:03.911984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.263 [2024-11-19 16:13:03.911988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.263 [2024-11-19 16:13:03.911992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.263 [2024-11-19 16:13:03.912000] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:17:57.264 [2024-11-19 16:13:03.912006] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:17:57.264 [2024-11-19 16:13:03.912011] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:17:57.264 [2024-11-19 16:13:03.912021] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:17:57.264 [2024-11-19 16:13:03.912026] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:17:57.264 [2024-11-19 16:13:03.912032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912069] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.264 [2024-11-19 16:13:03.912091] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.264 [2024-11-19 16:13:03.912200] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.264 [2024-11-19 16:13:03.912207] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.264 [2024-11-19 16:13:03.912210] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.264 [2024-11-19 16:13:03.912222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912230] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.264 [2024-11-19 16:13:03.912256] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912260] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.264 [2024-11-19 16:13:03.912293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.264 [2024-11-19 16:13:03.912313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.264 [2024-11-19 16:13:03.912332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912360] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.264 [2024-11-19 16:13:03.912382] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afdfc0, cid 0, qid 0 00:17:57.264 [2024-11-19 16:13:03.912390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x1afe140, cid 1, qid 0 00:17:57.264 [2024-11-19 16:13:03.912395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe2c0, cid 2, qid 0 00:17:57.264 [2024-11-19 16:13:03.912400] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.264 [2024-11-19 16:13:03.912404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.264 [2024-11-19 16:13:03.912816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.264 [2024-11-19 16:13:03.912830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.264 [2024-11-19 16:13:03.912835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.264 [2024-11-19 16:13:03.912849] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:17:57.264 [2024-11-19 16:13:03.912856] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912880] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912887] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.912894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.912901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.912908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.264 [2024-11-19 16:13:03.912928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.264 [2024-11-19 16:13:03.913196] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.264 [2024-11-19 16:13:03.913210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.264 [2024-11-19 16:13:03.913215] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.913219] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.264 [2024-11-19 16:13:03.917311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.917337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.917348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.917360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.264 [2024-11-19 16:13:03.917386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.264 [2024-11-19 16:13:03.917456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.264 [2024-11-19 16:13:03.917478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.264 [2024-11-19 16:13:03.917498] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917502] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=4096, cccid=4 00:17:57.264 [2024-11-19 16:13:03.917506] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe5c0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=4096 00:17:57.264 [2024-11-19 16:13:03.917511] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917523] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.264 [2024-11-19 16:13:03.917866] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.264 [2024-11-19 16:13:03.917871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.264 [2024-11-19 16:13:03.917886] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:17:57.264 [2024-11-19 16:13:03.917902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.917914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.917923] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.917927] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.264 [2024-11-19 16:13:03.917935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.264 [2024-11-19 16:13:03.917957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.264 [2024-11-19 16:13:03.918217] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.264 [2024-11-19 16:13:03.918231] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.264 [2024-11-19 16:13:03.918247] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.918253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=4096, cccid=4 00:17:57.264 [2024-11-19 16:13:03.918258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe5c0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=4096 00:17:57.264 [2024-11-19 16:13:03.918263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.918270] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.264 
[2024-11-19 16:13:03.918274] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.918283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.264 [2024-11-19 16:13:03.918289] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.264 [2024-11-19 16:13:03.918293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.264 [2024-11-19 16:13:03.918297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.264 [2024-11-19 16:13:03.918314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.918325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:57.264 [2024-11-19 16:13:03.918334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918338] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.918346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.918368] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.265 [2024-11-19 16:13:03.918733] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.265 [2024-11-19 16:13:03.918747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.265 [2024-11-19 16:13:03.918752] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918755] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=4096, cccid=4 00:17:57.265 [2024-11-19 16:13:03.918760] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe5c0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=4096 00:17:57.265 [2024-11-19 16:13:03.918765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918772] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918776] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.918791] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.918794] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918799] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.918807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918816] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918827] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918862] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918896] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:17:57.265 [2024-11-19 16:13:03.918902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:17:57.265 [2024-11-19 16:13:03.918908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:17:57.265 [2024-11-19 16:13:03.918926] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918933] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.918941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.918949] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.918958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.918965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.265 [2024-11-19 16:13:03.918995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.265 [2024-11-19 16:13:03.919003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe740, cid 5, qid 0 00:17:57.265 [2024-11-19 16:13:03.919388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.919403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.919407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.919411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.919418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.919424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.919428] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.919432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe740) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.919443] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.919447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.919454] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.919475] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe740, cid 5, qid 0 00:17:57.265 [2024-11-19 16:13:03.919579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.919589] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.919593] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.919597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe740) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.919607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.919612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.919619] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.919636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe740, cid 5, qid 0 00:17:57.265 [2024-11-19 16:13:03.919995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.920008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.920013] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920017] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe740) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.920028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.920039] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.920058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe740, cid 5, qid 0 00:17:57.265 [2024-11-19 16:13:03.920113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.265 [2024-11-19 16:13:03.920120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.265 [2024-11-19 16:13:03.920123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe740) on tqpair=0x1ab7b00 00:17:57.265 [2024-11-19 16:13:03.920146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.920159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.920166] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.920176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.920183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.920193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.920201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ab7b00) 00:17:57.265 [2024-11-19 16:13:03.920211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.265 [2024-11-19 16:13:03.920231] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe740, cid 5, qid 0 00:17:57.265 [2024-11-19 16:13:03.920266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe5c0, cid 4, qid 0 00:17:57.265 [2024-11-19 16:13:03.920272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe8c0, cid 6, qid 0 00:17:57.265 [2024-11-19 16:13:03.920277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afea40, cid 7, qid 0 00:17:57.265 [2024-11-19 16:13:03.920620] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.265 [2024-11-19 16:13:03.920633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.265 [2024-11-19 16:13:03.920638] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=8192, cccid=5 00:17:57.265 [2024-11-19 16:13:03.920661] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe740) on tqpair(0x1ab7b00): expected_datao=0, payload_size=8192 00:17:57.265 [2024-11-19 16:13:03.920666] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920683] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920688] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920693] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.265 [2024-11-19 16:13:03.920699] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.265 [2024-11-19 16:13:03.920703] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920707] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=512, cccid=4 00:17:57.265 [2024-11-19 16:13:03.920711] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe5c0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=512 00:17:57.265 [2024-11-19 16:13:03.920716] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920722] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920725] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.265 [2024-11-19 16:13:03.920731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:17:57.265 [2024-11-19 16:13:03.920737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.266 [2024-11-19 16:13:03.920740] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=512, cccid=6 00:17:57.266 [2024-11-19 16:13:03.920748] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afe8c0) on tqpair(0x1ab7b00): expected_datao=0, payload_size=512 00:17:57.266 [2024-11-19 16:13:03.920752] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920758] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920762] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.266 [2024-11-19 16:13:03.920773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.266 [2024-11-19 16:13:03.920777] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920780] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ab7b00): datao=0, datal=4096, cccid=7 00:17:57.266 [2024-11-19 16:13:03.920785] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1afea40) on tqpair(0x1ab7b00): expected_datao=0, payload_size=4096 00:17:57.266 [2024-11-19 16:13:03.920789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920795] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920799] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.266 [2024-11-19 16:13:03.920813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.266 [2024-11-19 16:13:03.920816] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.266 ===================================================== 00:17:57.266 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.266 ===================================================== 00:17:57.266 Controller Capabilities/Features 00:17:57.266 ================================ 00:17:57.266 Vendor ID: 8086 00:17:57.266 Subsystem Vendor ID: 8086 00:17:57.266 Serial Number: SPDK00000000000001 00:17:57.266 Model Number: SPDK bdev Controller 00:17:57.266 Firmware Version: 25.01 00:17:57.266 Recommended Arb Burst: 6 00:17:57.266 IEEE OUI Identifier: e4 d2 5c 00:17:57.266 Multi-path I/O 00:17:57.266 May have multiple subsystem ports: Yes 00:17:57.266 May have multiple controllers: Yes 00:17:57.266 Associated with SR-IOV VF: No 00:17:57.266 Max Data Transfer Size: 131072 00:17:57.266 Max Number of Namespaces: 32 00:17:57.266 Max Number of I/O Queues: 127 00:17:57.266 NVMe Specification Version (VS): 1.3 00:17:57.266 NVMe Specification Version (Identify): 1.3 00:17:57.266 Maximum Queue Entries: 128 00:17:57.266 Contiguous Queues Required: Yes 00:17:57.266 Arbitration Mechanisms Supported 00:17:57.266 Weighted Round Robin: Not Supported 00:17:57.266 Vendor Specific: Not Supported 00:17:57.266 Reset Timeout: 15000 ms 00:17:57.266 Doorbell Stride: 4 bytes 00:17:57.266 NVM Subsystem Reset: Not Supported 00:17:57.266 Command Sets Supported 
00:17:57.266 NVM Command Set: Supported 00:17:57.266 Boot Partition: Not Supported 00:17:57.266 Memory Page Size Minimum: 4096 bytes 00:17:57.266 Memory Page Size Maximum: 4096 bytes 00:17:57.266 Persistent Memory Region: Not Supported 00:17:57.266 Optional Asynchronous Events Supported 00:17:57.266 Namespace Attribute Notices: Supported 00:17:57.266 Firmware Activation Notices: Not Supported 00:17:57.266 ANA Change Notices: Not Supported 00:17:57.266 PLE Aggregate Log Change Notices: Not Supported 00:17:57.266 LBA Status Info Alert Notices: Not Supported 00:17:57.266 EGE Aggregate Log Change Notices: Not Supported 00:17:57.266 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.266 Zone Descriptor Change Notices: Not Supported 00:17:57.266 Discovery Log Change Notices: Not Supported 00:17:57.266 Controller Attributes 00:17:57.266 128-bit Host Identifier: Supported 00:17:57.266 Non-Operational Permissive Mode: Not Supported 00:17:57.266 NVM Sets: Not Supported 00:17:57.266 Read Recovery Levels: Not Supported 00:17:57.266 Endurance Groups: Not Supported 00:17:57.266 Predictable Latency Mode: Not Supported 00:17:57.266 Traffic Based Keep ALive: Not Supported 00:17:57.266 Namespace Granularity: Not Supported 00:17:57.266 SQ Associations: Not Supported 00:17:57.266 UUID List: Not Supported 00:17:57.266 Multi-Domain Subsystem: Not Supported 00:17:57.266 Fixed Capacity Management: Not Supported 00:17:57.266 Variable Capacity Management: Not Supported 00:17:57.266 Delete Endurance Group: Not Supported 00:17:57.266 Delete NVM Set: Not Supported 00:17:57.266 Extended LBA Formats Supported: Not Supported 00:17:57.266 Flexible Data Placement Supported: Not Supported 00:17:57.266 00:17:57.266 Controller Memory Buffer Support 00:17:57.266 ================================ 00:17:57.266 Supported: No 00:17:57.266 00:17:57.266 Persistent Memory Region Support 00:17:57.266 ================================ 00:17:57.266 Supported: No 00:17:57.266 00:17:57.266 Admin Command Set Attributes 00:17:57.266 ============================ 00:17:57.266 Security Send/Receive: Not Supported 00:17:57.266 Format NVM: Not Supported 00:17:57.266 Firmware Activate/Download: Not Supported 00:17:57.266 Namespace Management: Not Supported 00:17:57.266 Device Self-Test: Not Supported 00:17:57.266 Directives: Not Supported 00:17:57.266 NVMe-MI: Not Supported 00:17:57.266 Virtualization Management: Not Supported 00:17:57.266 Doorbell Buffer Config: Not Supported 00:17:57.266 Get LBA Status Capability: Not Supported 00:17:57.266 Command & Feature Lockdown Capability: Not Supported 00:17:57.266 Abort Command Limit: 4 00:17:57.266 Async Event Request Limit: 4 00:17:57.266 Number of Firmware Slots: N/A 00:17:57.266 Firmware Slot 1 Read-Only: N/A 00:17:57.266 Firmware Activation Without Reset: [2024-11-19 16:13:03.920820] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe740) on tqpair=0x1ab7b00 00:17:57.266 [2024-11-19 16:13:03.920836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.266 [2024-11-19 16:13:03.920842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.266 [2024-11-19 16:13:03.920846] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe5c0) on tqpair=0x1ab7b00 00:17:57.266 [2024-11-19 16:13:03.920861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.266 [2024-11-19 16:13:03.920867] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.266 [2024-11-19 16:13:03.920871] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe8c0) on tqpair=0x1ab7b00 00:17:57.266 [2024-11-19 16:13:03.920882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.266 [2024-11-19 16:13:03.920887] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.266 [2024-11-19 16:13:03.920891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.266 [2024-11-19 16:13:03.920895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afea40) on tqpair=0x1ab7b00 00:17:57.266 N/A 00:17:57.266 Multiple Update Detection Support: N/A 00:17:57.266 Firmware Update Granularity: No Information Provided 00:17:57.266 Per-Namespace SMART Log: No 00:17:57.266 Asymmetric Namespace Access Log Page: Not Supported 00:17:57.266 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:57.266 Command Effects Log Page: Supported 00:17:57.266 Get Log Page Extended Data: Supported 00:17:57.266 Telemetry Log Pages: Not Supported 00:17:57.266 Persistent Event Log Pages: Not Supported 00:17:57.266 Supported Log Pages Log Page: May Support 00:17:57.266 Commands Supported & Effects Log Page: Not Supported 00:17:57.266 Feature Identifiers & Effects Log Page:May Support 00:17:57.266 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.266 Data Area 4 for Telemetry Log: Not Supported 00:17:57.266 Error Log Page Entries Supported: 128 00:17:57.266 Keep Alive: Supported 00:17:57.266 Keep Alive Granularity: 10000 ms 00:17:57.266 00:17:57.266 NVM Command Set Attributes 00:17:57.266 ========================== 00:17:57.266 Submission Queue Entry Size 00:17:57.266 Max: 64 00:17:57.266 Min: 64 00:17:57.266 Completion Queue Entry Size 00:17:57.266 Max: 16 00:17:57.266 Min: 16 00:17:57.266 Number of Namespaces: 32 00:17:57.266 Compare Command: Supported 00:17:57.266 Write Uncorrectable Command: Not Supported 00:17:57.266 Dataset Management Command: Supported 00:17:57.266 Write Zeroes Command: Supported 00:17:57.266 Set Features Save Field: Not Supported 00:17:57.266 Reservations: Supported 00:17:57.266 Timestamp: Not Supported 00:17:57.266 Copy: Supported 00:17:57.266 Volatile Write Cache: Present 00:17:57.266 Atomic Write Unit (Normal): 1 00:17:57.266 Atomic Write Unit (PFail): 1 00:17:57.266 Atomic Compare & Write Unit: 1 00:17:57.266 Fused Compare & Write: Supported 00:17:57.266 Scatter-Gather List 00:17:57.266 SGL Command Set: Supported 00:17:57.266 SGL Keyed: Supported 00:17:57.266 SGL Bit Bucket Descriptor: Not Supported 00:17:57.266 SGL Metadata Pointer: Not Supported 00:17:57.266 Oversized SGL: Not Supported 00:17:57.266 SGL Metadata Address: Not Supported 00:17:57.266 SGL Offset: Supported 00:17:57.266 Transport SGL Data Block: Not Supported 00:17:57.266 Replay Protected Memory Block: Not Supported 00:17:57.266 00:17:57.266 Firmware Slot Information 00:17:57.266 ========================= 00:17:57.266 Active slot: 1 00:17:57.267 Slot 1 Firmware Revision: 25.01 00:17:57.267 00:17:57.267 00:17:57.267 Commands Supported and Effects 00:17:57.267 ============================== 00:17:57.267 Admin Commands 00:17:57.267 -------------- 00:17:57.267 Get Log Page (02h): Supported 00:17:57.267 Identify (06h): Supported 00:17:57.267 Abort (08h): Supported 00:17:57.267 Set Features (09h): Supported 00:17:57.267 Get Features (0Ah): 
Supported 00:17:57.267 Asynchronous Event Request (0Ch): Supported 00:17:57.267 Keep Alive (18h): Supported 00:17:57.267 I/O Commands 00:17:57.267 ------------ 00:17:57.267 Flush (00h): Supported LBA-Change 00:17:57.267 Write (01h): Supported LBA-Change 00:17:57.267 Read (02h): Supported 00:17:57.267 Compare (05h): Supported 00:17:57.267 Write Zeroes (08h): Supported LBA-Change 00:17:57.267 Dataset Management (09h): Supported LBA-Change 00:17:57.267 Copy (19h): Supported LBA-Change 00:17:57.267 00:17:57.267 Error Log 00:17:57.267 ========= 00:17:57.267 00:17:57.267 Arbitration 00:17:57.267 =========== 00:17:57.267 Arbitration Burst: 1 00:17:57.267 00:17:57.267 Power Management 00:17:57.267 ================ 00:17:57.267 Number of Power States: 1 00:17:57.267 Current Power State: Power State #0 00:17:57.267 Power State #0: 00:17:57.267 Max Power: 0.00 W 00:17:57.267 Non-Operational State: Operational 00:17:57.267 Entry Latency: Not Reported 00:17:57.267 Exit Latency: Not Reported 00:17:57.267 Relative Read Throughput: 0 00:17:57.267 Relative Read Latency: 0 00:17:57.267 Relative Write Throughput: 0 00:17:57.267 Relative Write Latency: 0 00:17:57.267 Idle Power: Not Reported 00:17:57.267 Active Power: Not Reported 00:17:57.267 Non-Operational Permissive Mode: Not Supported 00:17:57.267 00:17:57.267 Health Information 00:17:57.267 ================== 00:17:57.267 Critical Warnings: 00:17:57.267 Available Spare Space: OK 00:17:57.267 Temperature: OK 00:17:57.267 Device Reliability: OK 00:17:57.267 Read Only: No 00:17:57.267 Volatile Memory Backup: OK 00:17:57.267 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:57.267 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:57.267 Available Spare: 0% 00:17:57.267 Available Spare Threshold: 0% 00:17:57.267 Life Percentage Used:[2024-11-19 16:13:03.920998] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.921006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1ab7b00) 00:17:57.267 [2024-11-19 16:13:03.921013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.267 [2024-11-19 16:13:03.921038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afea40, cid 7, qid 0 00:17:57.267 [2024-11-19 16:13:03.925286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.267 [2024-11-19 16:13:03.925308] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.267 [2024-11-19 16:13:03.925313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925318] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afea40) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925359] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:17:57.267 [2024-11-19 16:13:03.925372] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afdfc0) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.267 [2024-11-19 16:13:03.925385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe140) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.267 [2024-11-19 16:13:03.925395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe2c0) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.267 [2024-11-19 16:13:03.925405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.267 [2024-11-19 16:13:03.925419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925423] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.267 [2024-11-19 16:13:03.925468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.267 [2024-11-19 16:13:03.925496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.267 [2024-11-19 16:13:03.925815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.267 [2024-11-19 16:13:03.925830] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.267 [2024-11-19 16:13:03.925835] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925839] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.925847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.925856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.267 [2024-11-19 16:13:03.925864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.267 [2024-11-19 16:13:03.925887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.267 [2024-11-19 16:13:03.926174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.267 [2024-11-19 16:13:03.926188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.267 [2024-11-19 16:13:03.926193] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926197] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.926203] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:17:57.267 [2024-11-19 16:13:03.926208] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:17:57.267 [2024-11-19 16:13:03.926219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.267 
[2024-11-19 16:13:03.926248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.267 [2024-11-19 16:13:03.926286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.267 [2024-11-19 16:13:03.926450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.267 [2024-11-19 16:13:03.926458] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.267 [2024-11-19 16:13:03.926477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.926492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.267 [2024-11-19 16:13:03.926508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.267 [2024-11-19 16:13:03.926526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.267 [2024-11-19 16:13:03.926605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.267 [2024-11-19 16:13:03.926612] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.267 [2024-11-19 16:13:03.926615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926619] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.267 [2024-11-19 16:13:03.926630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.267 [2024-11-19 16:13:03.926634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.926638] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.926645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.926662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.926973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.926989] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.926994] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.926999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.927010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927020] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.927028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.927049] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.927105] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.927112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.927116] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927121] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.927132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.927149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.927168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.927394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.927403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.927407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.927422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927426] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.927438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.927456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.927725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.927738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.927743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.927758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.927774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.927792] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.927835] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 
[2024-11-19 16:13:03.927842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.927845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.927859] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.927867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.927875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.927892] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.928010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.928017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.928020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.928034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928039] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.928050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.928067] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.928384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.928398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.928402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.928417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928426] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.928433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.928453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.928499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.928505] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.928509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:17:57.268 [2024-11-19 16:13:03.928513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.928523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928527] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928531] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.928538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.928555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.928808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.928821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.928825] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.928840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.928856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.928874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.928929] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.928936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.928939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.928953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.928961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.928968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.928986] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.933252] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.933273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.933278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.933283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.933297] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.933302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.933306] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ab7b00) 00:17:57.268 [2024-11-19 16:13:03.933314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.268 [2024-11-19 16:13:03.933339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1afe440, cid 3, qid 0 00:17:57.268 [2024-11-19 16:13:03.933390] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.268 [2024-11-19 16:13:03.933396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.268 [2024-11-19 16:13:03.933400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.268 [2024-11-19 16:13:03.933404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1afe440) on tqpair=0x1ab7b00 00:17:57.268 [2024-11-19 16:13:03.933412] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:17:57.269 0% 00:17:57.269 Data Units Read: 0 00:17:57.269 Data Units Written: 0 00:17:57.269 Host Read Commands: 0 00:17:57.269 Host Write Commands: 0 00:17:57.269 Controller Busy Time: 0 minutes 00:17:57.269 Power Cycles: 0 00:17:57.269 Power On Hours: 0 hours 00:17:57.269 Unsafe Shutdowns: 0 00:17:57.269 Unrecoverable Media Errors: 0 00:17:57.269 Lifetime Error Log Entries: 0 00:17:57.269 Warning Temperature Time: 0 minutes 00:17:57.269 Critical Temperature Time: 0 minutes 00:17:57.269 00:17:57.269 Number of Queues 00:17:57.269 ================ 00:17:57.269 Number of I/O Submission Queues: 127 00:17:57.269 Number of I/O Completion Queues: 127 00:17:57.269 00:17:57.269 Active Namespaces 00:17:57.269 ================= 00:17:57.269 Namespace ID:1 00:17:57.269 Error Recovery Timeout: Unlimited 00:17:57.269 Command Set Identifier: NVM (00h) 00:17:57.269 Deallocate: Supported 00:17:57.269 Deallocated/Unwritten Error: Not Supported 00:17:57.269 Deallocated Read Value: Unknown 00:17:57.269 Deallocate in Write Zeroes: Not Supported 00:17:57.269 Deallocated Guard Field: 0xFFFF 00:17:57.269 Flush: Supported 00:17:57.269 Reservation: Supported 00:17:57.269 Namespace Sharing Capabilities: Multiple Controllers 00:17:57.269 Size (in LBAs): 131072 (0GiB) 00:17:57.269 Capacity (in LBAs): 131072 (0GiB) 00:17:57.269 Utilization (in LBAs): 131072 (0GiB) 00:17:57.269 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:57.269 EUI64: ABCDEF0123456789 00:17:57.269 UUID: f448fcaa-98dd-4cf2-a713-168ddc8d147e 00:17:57.269 Thin Provisioning: Not Supported 00:17:57.269 Per-NS Atomic Units: Yes 00:17:57.269 Atomic Boundary Size (Normal): 0 00:17:57.269 Atomic Boundary Size (PFail): 0 00:17:57.269 Atomic Boundary Offset: 0 00:17:57.269 Maximum Single Source Range Length: 65535 00:17:57.269 Maximum Copy Length: 65535 00:17:57.269 Maximum Source Range Count: 1 00:17:57.269 NGUID/EUI64 Never Reused: No 00:17:57.269 Namespace Write Protected: No 00:17:57.269 Number of LBA Formats: 1 00:17:57.269 Current LBA Format: LBA Format #00 00:17:57.269 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:57.269 00:17:57.269 16:13:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.528 rmmod nvme_tcp 00:17:57.528 rmmod nvme_fabrics 00:17:57.528 rmmod nvme_keyring 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 89154 ']' 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 89154 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 89154 ']' 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 89154 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89154 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:57.528 killing process with pid 89154 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89154' 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 89154 00:17:57.528 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 89154 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 
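For context, the "Controller Capabilities/Features" and "Active Namespaces" details dumped above were produced by the identify test against the target at 10.0.0.3:4420 (subsystem nqn.2016-06.io.spdk:cnode1). A roughly equivalent query can be made with standard nvme-cli tooling; the lines below are only a hedged sketch, not part of this test run, and the device names /dev/nvme0 and /dev/nvme0n1 are assumptions rather than values taken from the log.

# Sketch: reproduce the identify data shown above with nvme-cli
# (assumes nvme-cli is installed and the kernel nvme-tcp module is available;
#  /dev/nvme0 and /dev/nvme0n1 are assumed device names)
modprobe nvme-tcp

# Discover subsystems exported by the target at 10.0.0.3:4420
nvme discover -t tcp -a 10.0.0.3 -s 4420

# Connect to the subsystem seen in the log
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# Identify Controller / Identify Namespace (mirrors the controller and
# namespace sections printed above)
nvme id-ctrl /dev/nvme0
nvme id-ns /dev/nvme0n1

# Clean up the connection
nvme disconnect -n nqn.2016-06.io.spdk:cnode1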
00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.786 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:58.045 00:17:58.045 real 0m2.232s 00:17:58.045 user 0m4.520s 00:17:58.045 sys 0m0.739s 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:58.045 ************************************ 00:17:58.045 END TEST nvmf_identify 00:17:58.045 ************************************ 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.045 ************************************ 00:17:58.045 START TEST nvmf_perf 00:17:58.045 ************************************ 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.045 * Looking for test storage... 
00:17:58.045 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.045 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.305 --rc genhtml_branch_coverage=1 00:17:58.305 --rc genhtml_function_coverage=1 00:17:58.305 --rc genhtml_legend=1 00:17:58.305 --rc geninfo_all_blocks=1 00:17:58.305 --rc geninfo_unexecuted_blocks=1 00:17:58.305 00:17:58.305 ' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.305 --rc genhtml_branch_coverage=1 00:17:58.305 --rc genhtml_function_coverage=1 00:17:58.305 --rc genhtml_legend=1 00:17:58.305 --rc geninfo_all_blocks=1 00:17:58.305 --rc geninfo_unexecuted_blocks=1 00:17:58.305 00:17:58.305 ' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.305 --rc genhtml_branch_coverage=1 00:17:58.305 --rc genhtml_function_coverage=1 00:17:58.305 --rc genhtml_legend=1 00:17:58.305 --rc geninfo_all_blocks=1 00:17:58.305 --rc geninfo_unexecuted_blocks=1 00:17:58.305 00:17:58.305 ' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.305 --rc genhtml_branch_coverage=1 00:17:58.305 --rc genhtml_function_coverage=1 00:17:58.305 --rc genhtml_legend=1 00:17:58.305 --rc geninfo_all_blocks=1 00:17:58.305 --rc geninfo_unexecuted_blocks=1 00:17:58.305 00:17:58.305 ' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.305 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.305 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.306 Cannot find device "nvmf_init_br" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.306 Cannot find device "nvmf_init_br2" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:58.306 Cannot find device "nvmf_tgt_br" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.306 Cannot find device "nvmf_tgt_br2" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:58.306 Cannot find device "nvmf_init_br" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:58.306 Cannot find device "nvmf_init_br2" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:58.306 Cannot find device "nvmf_tgt_br" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:58.306 Cannot find device "nvmf_tgt_br2" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:58.306 Cannot find device "nvmf_br" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:58.306 Cannot find device "nvmf_init_if" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.306 Cannot find device "nvmf_init_if2" 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.306 16:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.565 16:13:05 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:58.565 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.565 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:17:58.565 00:17:58.565 --- 10.0.0.3 ping statistics --- 00:17:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.565 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:58.565 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:58.565 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:17:58.565 00:17:58.565 --- 10.0.0.4 ping statistics --- 00:17:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.565 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:58.565 00:17:58.565 --- 10.0.0.1 ping statistics --- 00:17:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.565 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:58.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:58.565 00:17:58.565 --- 10.0.0.2 ping statistics --- 00:17:58.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.565 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:58.565 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=89403 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 89403 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 89403 ']' 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
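The nvmf_veth_init trace above builds a small veth-plus-bridge topology and verifies it with the four pings before the target application is launched. A condensed sketch of that bring-up using plain iproute2, for reference only; it shows just the first initiator/target pair, and the nvmf_init_if2/nvmf_tgt_if2 pair plus the iptables ACCEPT rules seen in the trace follow the same pattern:

    # namespace for the SPDK target plus one veth pair per side (host ends bridged)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # addresses as exercised by the pings: initiator 10.0.0.1, target 10.0.0.3
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    # bridge the host-side peers so the initiator and the namespaced target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.3                                  # host -> namespaced target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host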
00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.566 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.825 [2024-11-19 16:13:05.309382] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:17:58.825 [2024-11-19 16:13:05.309521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.825 [2024-11-19 16:13:05.456290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.825 [2024-11-19 16:13:05.480157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.825 [2024-11-19 16:13:05.480210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.825 [2024-11-19 16:13:05.480222] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.825 [2024-11-19 16:13:05.480230] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.825 [2024-11-19 16:13:05.480251] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.825 [2024-11-19 16:13:05.481125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.825 [2024-11-19 16:13:05.481210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.825 [2024-11-19 16:13:05.481331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.825 [2024-11-19 16:13:05.481331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.825 [2024-11-19 16:13:05.514416] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:59.084 16:13:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:59.650 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:59.650 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:59.909 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:59.909 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.167 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:00.167 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:18:00.167 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:00.167 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:00.167 16:13:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.425 [2024-11-19 16:13:06.995839] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.425 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.683 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.683 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.941 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.941 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:01.199 16:13:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.458 [2024-11-19 16:13:08.069394] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.458 16:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:01.716 16:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:01.716 16:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:01.716 16:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:01.716 16:13:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:03.090 Initializing NVMe Controllers 00:18:03.090 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:03.090 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:03.090 Initialization complete. Launching workers. 00:18:03.090 ======================================================== 00:18:03.090 Latency(us) 00:18:03.090 Device Information : IOPS MiB/s Average min max 00:18:03.090 PCIE (0000:00:10.0) NSID 1 from core 0: 22540.56 88.05 1419.17 310.02 8098.11 00:18:03.090 ======================================================== 00:18:03.090 Total : 22540.56 88.05 1419.17 310.02 8098.11 00:18:03.090 00:18:03.090 16:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:04.468 Initializing NVMe Controllers 00:18:04.468 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:04.468 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:04.468 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:04.468 Initialization complete. Launching workers. 
00:18:04.468 ======================================================== 00:18:04.468 Latency(us) 00:18:04.468 Device Information : IOPS MiB/s Average min max 00:18:04.468 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3680.90 14.38 271.30 101.67 7207.41 00:18:04.468 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.00 0.48 8261.50 6952.85 12032.64 00:18:04.468 ======================================================== 00:18:04.468 Total : 3802.90 14.86 527.63 101.67 12032.64 00:18:04.468 00:18:04.468 16:13:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:05.846 Initializing NVMe Controllers 00:18:05.846 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.846 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.846 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.846 Initialization complete. Launching workers. 00:18:05.846 ======================================================== 00:18:05.846 Latency(us) 00:18:05.846 Device Information : IOPS MiB/s Average min max 00:18:05.846 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8795.40 34.36 3638.65 544.12 7883.46 00:18:05.846 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4023.34 15.72 7996.55 5892.12 16079.84 00:18:05.846 ======================================================== 00:18:05.846 Total : 12818.74 50.07 5006.44 544.12 16079.84 00:18:05.846 00:18:05.846 16:13:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:05.846 16:13:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.378 Initializing NVMe Controllers 00:18:08.378 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.378 Controller IO queue size 128, less than required. 00:18:08.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.378 Controller IO queue size 128, less than required. 00:18:08.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.378 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.378 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.378 Initialization complete. Launching workers. 
00:18:08.378 ======================================================== 00:18:08.378 Latency(us) 00:18:08.378 Device Information : IOPS MiB/s Average min max 00:18:08.378 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1867.62 466.91 69520.09 36552.64 142256.54 00:18:08.378 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 656.78 164.20 201151.61 46931.36 317436.03 00:18:08.378 ======================================================== 00:18:08.378 Total : 2524.40 631.10 103767.17 36552.64 317436.03 00:18:08.378 00:18:08.378 16:13:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:08.378 Initializing NVMe Controllers 00:18:08.378 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.378 Controller IO queue size 128, less than required. 00:18:08.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.378 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:08.378 Controller IO queue size 128, less than required. 00:18:08.378 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.378 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:08.378 WARNING: Some requested NVMe devices were skipped 00:18:08.378 No valid NVMe controllers or AIO or URING devices found 00:18:08.635 16:13:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:11.167 Initializing NVMe Controllers 00:18:11.167 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:11.167 Controller IO queue size 128, less than required. 00:18:11.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.167 Controller IO queue size 128, less than required. 00:18:11.167 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:11.167 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:11.167 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:11.167 Initialization complete. Launching workers. 
00:18:11.167 00:18:11.167 ==================== 00:18:11.167 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:11.167 TCP transport: 00:18:11.167 polls: 11003 00:18:11.167 idle_polls: 6349 00:18:11.167 sock_completions: 4654 00:18:11.167 nvme_completions: 6573 00:18:11.167 submitted_requests: 9886 00:18:11.167 queued_requests: 1 00:18:11.167 00:18:11.167 ==================== 00:18:11.167 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:11.167 TCP transport: 00:18:11.167 polls: 11024 00:18:11.167 idle_polls: 6159 00:18:11.167 sock_completions: 4865 00:18:11.167 nvme_completions: 6907 00:18:11.167 submitted_requests: 10398 00:18:11.167 queued_requests: 1 00:18:11.167 ======================================================== 00:18:11.167 Latency(us) 00:18:11.167 Device Information : IOPS MiB/s Average min max 00:18:11.167 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1642.93 410.73 79190.67 38540.56 122458.74 00:18:11.167 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1726.42 431.61 74453.25 33511.76 120423.05 00:18:11.167 ======================================================== 00:18:11.167 Total : 3369.35 842.34 76763.26 33511.76 122458.74 00:18:11.167 00:18:11.167 16:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:11.167 16:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.426 16:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:11.426 16:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:11.426 16:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=ed7aa586-6ade-4166-906c-857abd1b26ca 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb ed7aa586-6ade-4166-906c-857abd1b26ca 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=ed7aa586-6ade-4166-906c-857abd1b26ca 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:11.684 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:11.943 { 00:18:11.943 "uuid": "ed7aa586-6ade-4166-906c-857abd1b26ca", 00:18:11.943 "name": "lvs_0", 00:18:11.943 "base_bdev": "Nvme0n1", 00:18:11.943 "total_data_clusters": 1278, 00:18:11.943 "free_clusters": 1278, 00:18:11.943 "block_size": 4096, 00:18:11.943 "cluster_size": 4194304 00:18:11.943 } 00:18:11.943 ]' 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ed7aa586-6ade-4166-906c-857abd1b26ca") .free_clusters' 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="ed7aa586-6ade-4166-906c-857abd1b26ca") .cluster_size' 00:18:11.943 5112 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:11.943 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u ed7aa586-6ade-4166-906c-857abd1b26ca lbd_0 5112 00:18:12.509 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=4e504fd0-4b50-4dd8-bf4e-38aae799e87b 00:18:12.509 16:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 4e504fd0-4b50-4dd8-bf4e-38aae799e87b lvs_n_0 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=0ca06f49-2af1-4390-bcdd-7052e54d882b 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 0ca06f49-2af1-4390-bcdd-7052e54d882b 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0ca06f49-2af1-4390-bcdd-7052e54d882b 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:18:12.767 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:13.024 { 00:18:13.024 "uuid": "ed7aa586-6ade-4166-906c-857abd1b26ca", 00:18:13.024 "name": "lvs_0", 00:18:13.024 "base_bdev": "Nvme0n1", 00:18:13.024 "total_data_clusters": 1278, 00:18:13.024 "free_clusters": 0, 00:18:13.024 "block_size": 4096, 00:18:13.024 "cluster_size": 4194304 00:18:13.024 }, 00:18:13.024 { 00:18:13.024 "uuid": "0ca06f49-2af1-4390-bcdd-7052e54d882b", 00:18:13.024 "name": "lvs_n_0", 00:18:13.024 "base_bdev": "4e504fd0-4b50-4dd8-bf4e-38aae799e87b", 00:18:13.024 "total_data_clusters": 1276, 00:18:13.024 "free_clusters": 1276, 00:18:13.024 "block_size": 4096, 00:18:13.024 "cluster_size": 4194304 00:18:13.024 } 00:18:13.024 ]' 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0ca06f49-2af1-4390-bcdd-7052e54d882b") .free_clusters' 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="0ca06f49-2af1-4390-bcdd-7052e54d882b") .cluster_size' 00:18:13.024 5104 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:13.024 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0ca06f49-2af1-4390-bcdd-7052e54d882b lbd_nest_0 5104 00:18:13.282 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=a6b627ec-2bf1-4f12-88d6-5d0840de9e24 00:18:13.282 16:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.540 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:13.540 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 a6b627ec-2bf1-4f12-88d6-5d0840de9e24 00:18:13.798 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:14.056 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:14.056 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:14.056 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:14.056 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:14.056 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:14.314 Initializing NVMe Controllers 00:18:14.314 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.314 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:14.314 WARNING: Some requested NVMe devices were skipped 00:18:14.314 No valid NVMe controllers or AIO or URING devices found 00:18:14.314 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:14.314 16:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:26.518 Initializing NVMe Controllers 00:18:26.518 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.518 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:26.518 Initialization complete. Launching workers. 
00:18:26.518 ======================================================== 00:18:26.518 Latency(us) 00:18:26.518 Device Information : IOPS MiB/s Average min max 00:18:26.518 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 947.90 118.49 1054.00 334.96 8250.57 00:18:26.518 ======================================================== 00:18:26.518 Total : 947.90 118.49 1054.00 334.96 8250.57 00:18:26.518 00:18:26.518 16:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:26.518 16:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:26.518 16:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:26.518 Initializing NVMe Controllers 00:18:26.518 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.518 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:26.518 WARNING: Some requested NVMe devices were skipped 00:18:26.518 No valid NVMe controllers or AIO or URING devices found 00:18:26.518 16:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:26.518 16:13:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.492 Initializing NVMe Controllers 00:18:36.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.492 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.492 Initialization complete. Launching workers. 
00:18:36.492 ======================================================== 00:18:36.492 Latency(us) 00:18:36.492 Device Information : IOPS MiB/s Average min max 00:18:36.492 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1343.86 167.98 23833.50 7167.87 67618.26 00:18:36.492 ======================================================== 00:18:36.492 Total : 1343.86 167.98 23833.50 7167.87 67618.26 00:18:36.492 00:18:36.492 16:13:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:36.492 16:13:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.492 16:13:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.492 Initializing NVMe Controllers 00:18:36.492 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.492 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:36.492 WARNING: Some requested NVMe devices were skipped 00:18:36.492 No valid NVMe controllers or AIO or URING devices found 00:18:36.492 16:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.492 16:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:46.470 Initializing NVMe Controllers 00:18:46.470 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.470 Controller IO queue size 128, less than required. 00:18:46.470 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.470 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.470 Initialization complete. Launching workers. 
00:18:46.470 ======================================================== 00:18:46.470 Latency(us) 00:18:46.470 Device Information : IOPS MiB/s Average min max 00:18:46.470 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4099.51 512.44 31266.13 11882.96 62252.65 00:18:46.470 ======================================================== 00:18:46.470 Total : 4099.51 512.44 31266.13 11882.96 62252.65 00:18:46.470 00:18:46.470 16:13:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.470 16:13:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a6b627ec-2bf1-4f12-88d6-5d0840de9e24 00:18:46.728 16:13:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:46.986 16:13:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e504fd0-4b50-4dd8-bf4e-38aae799e87b 00:18:47.244 16:13:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.503 rmmod nvme_tcp 00:18:47.503 rmmod nvme_fabrics 00:18:47.503 rmmod nvme_keyring 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 89403 ']' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 89403 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 89403 ']' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 89403 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89403 00:18:47.503 killing process with pid 89403 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89403' 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 89403 00:18:47.503 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 89403 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:18:48.070 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:48.071 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:48.330 00:18:48.330 real 0m50.372s 00:18:48.330 user 3m9.315s 00:18:48.330 sys 0m11.937s 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:48.330 ************************************ 00:18:48.330 END TEST nvmf_perf 00:18:48.330 ************************************ 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.330 16:13:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:48.330 ************************************ 00:18:48.330 START TEST nvmf_fio_host 00:18:48.330 ************************************ 00:18:48.330 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:48.590 * Looking for test storage... 00:18:48.590 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.590 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.591 --rc genhtml_branch_coverage=1 00:18:48.591 --rc genhtml_function_coverage=1 00:18:48.591 --rc genhtml_legend=1 00:18:48.591 --rc geninfo_all_blocks=1 00:18:48.591 --rc geninfo_unexecuted_blocks=1 00:18:48.591 00:18:48.591 ' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.591 --rc genhtml_branch_coverage=1 00:18:48.591 --rc genhtml_function_coverage=1 00:18:48.591 --rc genhtml_legend=1 00:18:48.591 --rc geninfo_all_blocks=1 00:18:48.591 --rc geninfo_unexecuted_blocks=1 00:18:48.591 00:18:48.591 ' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.591 --rc genhtml_branch_coverage=1 00:18:48.591 --rc genhtml_function_coverage=1 00:18:48.591 --rc genhtml_legend=1 00:18:48.591 --rc geninfo_all_blocks=1 00:18:48.591 --rc geninfo_unexecuted_blocks=1 00:18:48.591 00:18:48.591 ' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:48.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.591 --rc genhtml_branch_coverage=1 00:18:48.591 --rc genhtml_function_coverage=1 00:18:48.591 --rc genhtml_legend=1 00:18:48.591 --rc geninfo_all_blocks=1 00:18:48.591 --rc geninfo_unexecuted_blocks=1 00:18:48.591 00:18:48.591 ' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.591 16:13:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:48.591 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
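The nvmf_veth_init sequence traced below builds a small veth-plus-bridge test topology between the initiator and the namespaced target. Condensed into a rough sketch (the full setup lives in test/nvmf/common.sh and also brings each link up and configures the second initiator/target pair, both omitted here), it amounts to:
  # initiator-side veth pair stays in the default namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  # target-side pair is moved into a dedicated namespace for the SPDK target
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # a bridge in the default namespace joins the two *_br peers
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # open TCP/4420 for NVMe-oF traffic and verify the target address is reachable
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3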
00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.591 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:48.592 Cannot find device "nvmf_init_br" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:48.592 Cannot find device "nvmf_init_br2" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:48.592 Cannot find device "nvmf_tgt_br" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:48.592 Cannot find device "nvmf_tgt_br2" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:48.592 Cannot find device "nvmf_init_br" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:48.592 Cannot find device "nvmf_init_br2" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:48.592 Cannot find device "nvmf_tgt_br" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:48.592 Cannot find device "nvmf_tgt_br2" 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:48.592 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:48.851 Cannot find device "nvmf_br" 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:48.851 Cannot find device "nvmf_init_if" 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:48.851 Cannot find device "nvmf_init_if2" 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.851 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:48.851 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:48.852 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:18:48.852 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:18:48.852 00:18:48.852 --- 10.0.0.3 ping statistics --- 00:18:48.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.852 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:48.852 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:48.852 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:48.852 00:18:48.852 --- 10.0.0.4 ping statistics --- 00:18:48.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.852 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:48.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:48.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:48.852 00:18:48.852 --- 10.0.0.1 ping statistics --- 00:18:48.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.852 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:48.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:48.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:18:48.852 00:18:48.852 --- 10.0.0.2 ping statistics --- 00:18:48.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:48.852 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:48.852 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=90266 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 90266 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 90266 ']' 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.111 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.111 [2024-11-19 16:13:55.643457] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:18:49.111 [2024-11-19 16:13:55.643570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.111 [2024-11-19 16:13:55.796035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.111 [2024-11-19 16:13:55.820397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.111 [2024-11-19 16:13:55.820690] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.111 [2024-11-19 16:13:55.820809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.111 [2024-11-19 16:13:55.820898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.111 [2024-11-19 16:13:55.820992] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
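Once the reactors report in below, the harness drives the target entirely over JSON-RPC. Roughly, and reusing the same calls that appear later in this trace (the relative paths and the rpc_get_methods poll are illustrative stand-ins for waitforlisten, not the literal fio.sh source):
  # start the NVMe-oF target inside the test namespace and wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null   # poll until the app answers
  # create the TCP transport, a RAM-backed bdev, and a subsystem that exports it on 10.0.0.3:4420
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  # fio then exercises the subsystem through the SPDK NVMe ioengine plugin
  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096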
00:18:49.111 [2024-11-19 16:13:55.822019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.111 [2024-11-19 16:13:55.822154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.111 [2024-11-19 16:13:55.822291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.111 [2024-11-19 16:13:55.822461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.371 [2024-11-19 16:13:55.856897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:49.371 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.371 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:18:49.371 16:13:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:49.629 [2024-11-19 16:13:56.136694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.629 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:49.629 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:49.630 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.630 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:49.888 Malloc1 00:18:49.888 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:50.146 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:50.405 16:13:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:50.664 [2024-11-19 16:13:57.218590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:50.664 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:50.922 16:13:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:51.181 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:51.181 fio-3.35 00:18:51.181 Starting 1 thread 00:18:53.714 00:18:53.714 test: (groupid=0, jobs=1): err= 0: pid=90336: Tue Nov 19 16:14:00 2024 00:18:53.714 read: IOPS=9106, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec) 00:18:53.714 slat (nsec): min=1909, max=344031, avg=2567.93, stdev=4123.73 00:18:53.714 clat (usec): min=2578, max=12616, avg=7292.97, stdev=557.43 00:18:53.714 lat (usec): min=2639, max=12618, avg=7295.54, stdev=557.30 00:18:53.714 clat percentiles (usec): 00:18:53.714 | 1.00th=[ 6194], 5.00th=[ 6521], 10.00th=[ 6652], 20.00th=[ 6849], 00:18:53.714 | 30.00th=[ 7046], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:18:53.714 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:18:53.714 | 99.00th=[ 8586], 99.50th=[ 9372], 99.90th=[11338], 99.95th=[12125], 00:18:53.714 | 99.99th=[12649] 00:18:53.714 bw ( KiB/s): min=35752, max=37184, per=99.93%, avg=36398.00, stdev=601.77, samples=4 00:18:53.714 iops : min= 8938, max= 9296, avg=9099.50, stdev=150.44, samples=4 00:18:53.714 write: IOPS=9118, BW=35.6MiB/s (37.3MB/s)(71.4MiB/2006msec); 0 zone resets 00:18:53.714 slat (usec): min=2, max=267, avg= 2.66, stdev= 2.52 00:18:53.714 clat (usec): min=2430, max=11709, avg=6687.44, stdev=512.06 00:18:53.714 lat (usec): min=2445, max=11712, avg=6690.11, stdev=512.08 00:18:53.714 clat 
percentiles (usec): 00:18:53.714 | 1.00th=[ 5735], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 6325], 00:18:53.714 | 30.00th=[ 6456], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6783], 00:18:53.714 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:18:53.714 | 99.00th=[ 7963], 99.50th=[ 9372], 99.90th=[10421], 99.95th=[11076], 00:18:53.714 | 99.99th=[11731] 00:18:53.714 bw ( KiB/s): min=36328, max=36552, per=99.98%, avg=36464.00, stdev=95.78, samples=4 00:18:53.714 iops : min= 9082, max= 9138, avg=9116.00, stdev=23.94, samples=4 00:18:53.714 lat (msec) : 4=0.07%, 10=99.60%, 20=0.33% 00:18:53.714 cpu : usr=66.08%, sys=24.39%, ctx=13, majf=0, minf=7 00:18:53.714 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:53.714 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:53.714 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:53.714 issued rwts: total=18267,18291,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:53.714 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:53.714 00:18:53.714 Run status group 0 (all jobs): 00:18:53.714 READ: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.8MB), run=2006-2006msec 00:18:53.714 WRITE: bw=35.6MiB/s (37.3MB/s), 35.6MiB/s-35.6MiB/s (37.3MB/s-37.3MB/s), io=71.4MiB (74.9MB), run=2006-2006msec 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:53.714 16:14:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:53.714 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:53.714 fio-3.35 00:18:53.714 Starting 1 thread 00:18:56.289 00:18:56.289 test: (groupid=0, jobs=1): err= 0: pid=90385: Tue Nov 19 16:14:02 2024 00:18:56.289 read: IOPS=8349, BW=130MiB/s (137MB/s)(262MiB/2006msec) 00:18:56.289 slat (usec): min=2, max=108, avg= 3.80, stdev= 2.27 00:18:56.289 clat (usec): min=3064, max=17941, avg=8528.84, stdev=2509.21 00:18:56.289 lat (usec): min=3067, max=17945, avg=8532.64, stdev=2509.27 00:18:56.289 clat percentiles (usec): 00:18:56.289 | 1.00th=[ 4178], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 6259], 00:18:56.289 | 30.00th=[ 6980], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8979], 00:18:56.289 | 70.00th=[ 9765], 80.00th=[10683], 90.00th=[11863], 95.00th=[13173], 00:18:56.289 | 99.00th=[15270], 99.50th=[15926], 99.90th=[16909], 99.95th=[17433], 00:18:56.289 | 99.99th=[17957] 00:18:56.289 bw ( KiB/s): min=63680, max=71872, per=50.45%, avg=67400.00, stdev=3868.02, samples=4 00:18:56.289 iops : min= 3980, max= 4492, avg=4212.50, stdev=241.75, samples=4 00:18:56.289 write: IOPS=4704, BW=73.5MiB/s (77.1MB/s)(137MiB/1870msec); 0 zone resets 00:18:56.289 slat (usec): min=31, max=333, avg=38.73, stdev= 9.88 00:18:56.289 clat (usec): min=3119, max=21413, avg=12238.42, stdev=2249.17 00:18:56.289 lat (usec): min=3158, max=21445, avg=12277.15, stdev=2250.63 00:18:56.289 clat percentiles (usec): 00:18:56.289 | 1.00th=[ 7832], 5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290], 00:18:56.289 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11994], 60.00th=[12518], 00:18:56.289 | 70.00th=[13304], 80.00th=[14222], 90.00th=[15270], 95.00th=[16057], 00:18:56.289 | 99.00th=[18482], 99.50th=[19006], 99.90th=[20579], 99.95th=[20841], 00:18:56.289 | 99.99th=[21365] 00:18:56.289 bw ( KiB/s): min=65376, max=73984, per=92.58%, avg=69680.00, stdev=4294.88, samples=4 00:18:56.289 iops : min= 4086, max= 4624, avg=4355.00, stdev=268.43, samples=4 00:18:56.289 lat (msec) : 4=0.44%, 10=52.15%, 20=47.34%, 50=0.07% 00:18:56.289 cpu : usr=80.45%, sys=14.26%, ctx=8, majf=0, minf=3 00:18:56.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:56.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:56.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:56.289 issued rwts: total=16749,8797,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:56.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:56.289 00:18:56.289 Run status group 0 (all jobs): 00:18:56.289 READ: bw=130MiB/s (137MB/s), 
130MiB/s-130MiB/s (137MB/s-137MB/s), io=262MiB (274MB), run=2006-2006msec 00:18:56.289 WRITE: bw=73.5MiB/s (77.1MB/s), 73.5MiB/s-73.5MiB/s (77.1MB/s-77.1MB/s), io=137MiB (144MB), run=1870-1870msec 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:56.289 16:14:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:56.566 Nvme0n1 00:18:56.566 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=f17f9aff-5c41-4a4a-90da-42f50ad05815 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb f17f9aff-5c41-4a4a-90da-42f50ad05815 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f17f9aff-5c41-4a4a-90da-42f50ad05815 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:18:56.824 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:18:57.083 { 00:18:57.083 "uuid": "f17f9aff-5c41-4a4a-90da-42f50ad05815", 00:18:57.083 "name": "lvs_0", 00:18:57.083 "base_bdev": "Nvme0n1", 00:18:57.083 "total_data_clusters": 4, 00:18:57.083 "free_clusters": 4, 00:18:57.083 "block_size": 4096, 00:18:57.083 "cluster_size": 1073741824 00:18:57.083 } 00:18:57.083 ]' 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f17f9aff-5c41-4a4a-90da-42f50ad05815") .free_clusters' 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:18:57.083 16:14:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f17f9aff-5c41-4a4a-90da-42f50ad05815") .cluster_size' 00:18:57.083 4096 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4096 00:18:57.083 16:14:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:57.342 60f55fa5-7728-48c0-bf04-e9d0893a80ec 00:18:57.342 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:57.601 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:57.860 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.118 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:58.119 16:14:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:58.119 16:14:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:58.378 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:58.378 fio-3.35 00:18:58.378 Starting 1 thread 00:19:00.910 00:19:00.910 test: (groupid=0, jobs=1): err= 0: pid=90488: Tue Nov 19 16:14:07 2024 00:19:00.910 read: IOPS=6185, BW=24.2MiB/s (25.3MB/s)(48.6MiB/2010msec) 00:19:00.910 slat (nsec): min=1990, max=357152, avg=2757.52, stdev=4274.00 00:19:00.910 clat (usec): min=2952, max=19830, avg=10824.79, stdev=905.28 00:19:00.910 lat (usec): min=2962, max=19832, avg=10827.55, stdev=904.90 00:19:00.910 clat percentiles (usec): 00:19:00.910 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:19:00.910 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945], 00:19:00.910 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12256], 00:19:00.910 | 99.00th=[12780], 99.50th=[13173], 99.90th=[17433], 99.95th=[18482], 00:19:00.910 | 99.99th=[19792] 00:19:00.910 bw ( KiB/s): min=24056, max=25032, per=100.00%, avg=24740.00, stdev=458.66, samples=4 00:19:00.910 iops : min= 6014, max= 6258, avg=6185.00, stdev=114.66, samples=4 00:19:00.910 write: IOPS=6174, BW=24.1MiB/s (25.3MB/s)(48.5MiB/2010msec); 0 zone resets 00:19:00.910 slat (usec): min=2, max=246, avg= 2.91, stdev= 3.02 00:19:00.910 clat (usec): min=2428, max=18684, avg=9826.54, stdev=864.64 00:19:00.910 lat (usec): min=2442, max=18687, avg=9829.45, stdev=864.44 00:19:00.910 clat percentiles (usec): 00:19:00.910 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9241], 00:19:00.910 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9765], 60.00th=[10028], 00:19:00.910 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:19:00.910 | 99.00th=[11600], 99.50th=[11994], 99.90th=[17433], 99.95th=[18220], 00:19:00.910 | 99.99th=[18482] 00:19:00.910 bw ( KiB/s): min=24512, max=25032, per=99.97%, avg=24690.00, stdev=233.91, samples=4 00:19:00.910 iops : min= 6128, max= 6258, avg=6172.50, stdev=58.48, samples=4 00:19:00.910 lat (msec) : 4=0.06%, 10=37.08%, 20=62.86% 00:19:00.910 cpu : usr=71.53%, sys=21.55%, ctx=6, majf=0, minf=7 00:19:00.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:00.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.910 issued rwts: total=12432,12410,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.910 00:19:00.910 Run status group 0 (all jobs): 00:19:00.910 READ: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.6MiB (50.9MB), run=2010-2010msec 
00:19:00.910 WRITE: bw=24.1MiB/s (25.3MB/s), 24.1MiB/s-24.1MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.8MB), run=2010-2010msec 00:19:00.910 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:00.910 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=ca67721d-06eb-4709-87c7-d97d9d48be33 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb ca67721d-06eb-4709-87c7-d97d9d48be33 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=ca67721d-06eb-4709-87c7-d97d9d48be33 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:19:01.169 16:14:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:01.427 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:19:01.427 { 00:19:01.427 "uuid": "f17f9aff-5c41-4a4a-90da-42f50ad05815", 00:19:01.427 "name": "lvs_0", 00:19:01.427 "base_bdev": "Nvme0n1", 00:19:01.427 "total_data_clusters": 4, 00:19:01.427 "free_clusters": 0, 00:19:01.427 "block_size": 4096, 00:19:01.427 "cluster_size": 1073741824 00:19:01.427 }, 00:19:01.427 { 00:19:01.427 "uuid": "ca67721d-06eb-4709-87c7-d97d9d48be33", 00:19:01.427 "name": "lvs_n_0", 00:19:01.427 "base_bdev": "60f55fa5-7728-48c0-bf04-e9d0893a80ec", 00:19:01.427 "total_data_clusters": 1022, 00:19:01.427 "free_clusters": 1022, 00:19:01.427 "block_size": 4096, 00:19:01.427 "cluster_size": 4194304 00:19:01.427 } 00:19:01.427 ]' 00:19:01.427 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="ca67721d-06eb-4709-87c7-d97d9d48be33") .free_clusters' 00:19:01.427 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:19:01.427 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="ca67721d-06eb-4709-87c7-d97d9d48be33") .cluster_size' 00:19:01.686 4088 00:19:01.686 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:19:01.686 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:19:01.686 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:19:01.686 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:01.686 0e24b1e1-3de6-4962-a7de-319e94e6ecc3 00:19:01.686 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:02.253 16:14:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:02.253 16:14:08 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:02.512 16:14:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:02.771 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:02.771 fio-3.35 00:19:02.771 Starting 1 thread 00:19:05.303 00:19:05.303 test: (groupid=0, jobs=1): err= 0: pid=90573: Tue Nov 19 16:14:11 2024 00:19:05.303 read: 
IOPS=5508, BW=21.5MiB/s (22.6MB/s)(43.2MiB/2009msec) 00:19:05.303 slat (usec): min=2, max=338, avg= 2.90, stdev= 4.36 00:19:05.303 clat (usec): min=3311, max=22023, avg=12191.37, stdev=1027.82 00:19:05.303 lat (usec): min=3320, max=22026, avg=12194.27, stdev=1027.42 00:19:05.303 clat percentiles (usec): 00:19:05.303 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:19:05.303 | 30.00th=[11731], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:19:05.303 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13435], 95.00th=[13698], 00:19:05.303 | 99.00th=[14353], 99.50th=[14877], 99.90th=[18482], 99.95th=[21890], 00:19:05.303 | 99.99th=[21890] 00:19:05.303 bw ( KiB/s): min=20880, max=22488, per=99.79%, avg=21988.00, stdev=752.18, samples=4 00:19:05.303 iops : min= 5220, max= 5622, avg=5497.00, stdev=188.05, samples=4 00:19:05.303 write: IOPS=5470, BW=21.4MiB/s (22.4MB/s)(42.9MiB/2009msec); 0 zone resets 00:19:05.303 slat (usec): min=2, max=279, avg= 3.01, stdev= 3.27 00:19:05.303 clat (usec): min=2467, max=18602, avg=11035.16, stdev=941.24 00:19:05.303 lat (usec): min=2481, max=18605, avg=11038.18, stdev=941.06 00:19:05.303 clat percentiles (usec): 00:19:05.303 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10290], 00:19:05.303 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:19:05.303 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:19:05.303 | 99.00th=[13173], 99.50th=[13435], 99.90th=[16909], 99.95th=[18220], 00:19:05.303 | 99.99th=[18482] 00:19:05.303 bw ( KiB/s): min=21504, max=22328, per=99.96%, avg=21874.00, stdev=339.87, samples=4 00:19:05.303 iops : min= 5376, max= 5582, avg=5468.50, stdev=84.97, samples=4 00:19:05.303 lat (msec) : 4=0.05%, 10=6.02%, 20=93.89%, 50=0.05% 00:19:05.303 cpu : usr=71.81%, sys=21.76%, ctx=6, majf=0, minf=7 00:19:05.303 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:05.303 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:05.303 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:05.303 issued rwts: total=11067,10991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:05.303 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:05.303 00:19:05.303 Run status group 0 (all jobs): 00:19:05.303 READ: bw=21.5MiB/s (22.6MB/s), 21.5MiB/s-21.5MiB/s (22.6MB/s-22.6MB/s), io=43.2MiB (45.3MB), run=2009-2009msec 00:19:05.303 WRITE: bw=21.4MiB/s (22.4MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=42.9MiB (45.0MB), run=2009-2009msec 00:19:05.303 16:14:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:05.303 16:14:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:05.303 16:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:05.561 16:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:06.127 16:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:06.127 16:14:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:06.693 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.259 rmmod nvme_tcp 00:19:07.259 rmmod nvme_fabrics 00:19:07.259 rmmod nvme_keyring 00:19:07.259 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 90266 ']' 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 90266 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 90266 ']' 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 90266 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.519 16:14:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90266 00:19:07.519 killing process with pid 90266 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90266' 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 90266 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 90266 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:07.519 
16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:07.519 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:07.778 00:19:07.778 real 0m19.378s 00:19:07.778 user 1m24.026s 00:19:07.778 sys 0m4.554s 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.778 ************************************ 00:19:07.778 END TEST nvmf_fio_host 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.778 ************************************ 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:07.778 ************************************ 00:19:07.778 START TEST nvmf_failover 00:19:07.778 ************************************ 00:19:07.778 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:08.038 * Looking for test storage... 
00:19:08.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:08.038 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:08.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.039 --rc genhtml_branch_coverage=1 00:19:08.039 --rc genhtml_function_coverage=1 00:19:08.039 --rc genhtml_legend=1 00:19:08.039 --rc geninfo_all_blocks=1 00:19:08.039 --rc geninfo_unexecuted_blocks=1 00:19:08.039 00:19:08.039 ' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:08.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.039 --rc genhtml_branch_coverage=1 00:19:08.039 --rc genhtml_function_coverage=1 00:19:08.039 --rc genhtml_legend=1 00:19:08.039 --rc geninfo_all_blocks=1 00:19:08.039 --rc geninfo_unexecuted_blocks=1 00:19:08.039 00:19:08.039 ' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:08.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.039 --rc genhtml_branch_coverage=1 00:19:08.039 --rc genhtml_function_coverage=1 00:19:08.039 --rc genhtml_legend=1 00:19:08.039 --rc geninfo_all_blocks=1 00:19:08.039 --rc geninfo_unexecuted_blocks=1 00:19:08.039 00:19:08.039 ' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:08.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.039 --rc genhtml_branch_coverage=1 00:19:08.039 --rc genhtml_function_coverage=1 00:19:08.039 --rc genhtml_legend=1 00:19:08.039 --rc geninfo_all_blocks=1 00:19:08.039 --rc geninfo_unexecuted_blocks=1 00:19:08.039 00:19:08.039 ' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.039 
16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.039 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:08.040 Cannot find device "nvmf_init_br" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:08.040 Cannot find device "nvmf_init_br2" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:08.040 Cannot find device "nvmf_tgt_br" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.040 Cannot find device "nvmf_tgt_br2" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:08.040 Cannot find device "nvmf_init_br" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:08.040 Cannot find device "nvmf_init_br2" 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:08.040 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:08.299 Cannot find device "nvmf_tgt_br" 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:08.299 Cannot find device "nvmf_tgt_br2" 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:08.299 Cannot find device "nvmf_br" 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:08.299 Cannot find device "nvmf_init_if" 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:08.299 Cannot find device "nvmf_init_if2" 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.299 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.299 
16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.299 16:14:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:08.559 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:08.559 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:19:08.559 00:19:08.559 --- 10.0.0.3 ping statistics --- 00:19:08.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.559 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:08.559 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:08.559 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:19:08.559 00:19:08.559 --- 10.0.0.4 ping statistics --- 00:19:08.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.559 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:08.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:08.559 00:19:08.559 --- 10.0.0.1 ping statistics --- 00:19:08.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.559 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:08.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:19:08.559 00:19:08.559 --- 10.0.0.2 ping statistics --- 00:19:08.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.559 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
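[Note: the nvmf_veth_init sequence traced above is what provides the 10.0.0.1-10.0.0.4 addresses used by every TCP test in this job. A condensed, stand-alone reconstruction of that topology, using only the ip/iptables commands visible in the trace (interface names and addresses are taken verbatim from the log; this is a readability sketch, not the nvmf/common.sh source), looks like:

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # two initiator veth pairs ...
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # ... and two target veth pairs
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if     # listener addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link add nvmf_br type bridge; ip link set nvmf_br up         # bridge ties host side to the namespace
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                               # same sanity pings as in the trace

The four successful pings recorded above confirm the same thing the sketch ends with: the host reaches both target addresses and the namespace reaches both initiator addresses through the bridge.]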
00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=90866 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 90866 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90866 ']' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.559 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.559 [2024-11-19 16:14:15.148057] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:19:08.559 [2024-11-19 16:14:15.148161] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.818 [2024-11-19 16:14:15.292327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:08.818 [2024-11-19 16:14:15.310743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.818 [2024-11-19 16:14:15.311011] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.818 [2024-11-19 16:14:15.311085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.818 [2024-11-19 16:14:15.311197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.818 [2024-11-19 16:14:15.311253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
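[Note: the -m 0xE core mask passed to nvmfappstart above is why exactly three reactor threads are reported on the next lines, while the bdevperf initiator started later in this test runs with -c 0x1 on the remaining core (see the DPDK EAL parameters in the replayed bdevperf log further down). As a quick bitmask check:

    0xE = 0b1110  ->  cores 1, 2, 3   (nvmf_tgt reactors)
    0x1 = 0b0001  ->  core 0          (bdevperf initiator)
]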
00:19:08.818 [2024-11-19 16:14:15.312002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.818 [2024-11-19 16:14:15.312152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.818 [2024-11-19 16:14:15.312281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.818 [2024-11-19 16:14:15.341308] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.818 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:09.076 [2024-11-19 16:14:15.717834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.076 16:14:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:09.334 Malloc0 00:19:09.334 16:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:09.592 16:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:09.850 16:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:10.109 [2024-11-19 16:14:16.712467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:10.109 16:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:10.367 [2024-11-19 16:14:16.948708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:10.367 16:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:10.625 [2024-11-19 16:14:17.176773] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90914 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
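[Note: stripped of the xtrace noise, the setup recorded above for the failover test reduces to one malloc-backed subsystem with three TCP listeners and a bdevperf process attached to two of them with explicit failover paths. The following simply regroups the rpc.py calls as they appear in the trace; it is a readability sketch under those assumptions, not the failover.sh source:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                                  # three listeners to fail over between
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s $port
    done

    # initiator: bdevperf in RPC-server mode, then two paths added with -x failover
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    brpc="$rpc -s /var/tmp/bdevperf.sock"
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover

While the 15-second verify workload runs, the script then shuffles the listeners (4420 removed, a third path attached on 4422, 4421 removed, 4420 re-added, 4422 removed), forcing the initiator to fail over between paths; that is what shows up as the ABORTED - SQ DELETION completions in the replayed bdevperf log below.]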
00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90914 /var/tmp/bdevperf.sock 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 90914 ']' 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.625 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.626 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:10.884 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.884 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:10.884 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:11.142 NVMe0n1 00:19:11.142 16:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:11.708 00:19:11.708 16:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90930 00:19:11.708 16:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:11.708 16:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:12.652 16:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:12.921 16:14:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:16.205 16:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:16.205 00:19:16.205 16:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:16.463 16:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:19.760 16:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:19.760 [2024-11-19 16:14:26.361248] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:19.760 16:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:19:20.696 16:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:20.955 16:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 90930 00:19:27.589 { 00:19:27.589 "results": [ 00:19:27.589 { 00:19:27.589 "job": "NVMe0n1", 00:19:27.589 "core_mask": "0x1", 00:19:27.589 "workload": "verify", 00:19:27.589 "status": "finished", 00:19:27.589 "verify_range": { 00:19:27.589 "start": 0, 00:19:27.589 "length": 16384 00:19:27.589 }, 00:19:27.589 "queue_depth": 128, 00:19:27.589 "io_size": 4096, 00:19:27.589 "runtime": 15.009622, 00:19:27.589 "iops": 9221.08498135396, 00:19:27.589 "mibps": 36.01986320841391, 00:19:27.589 "io_failed": 3221, 00:19:27.589 "io_timeout": 0, 00:19:27.589 "avg_latency_us": 13533.992272502608, 00:19:27.589 "min_latency_us": 614.4, 00:19:27.589 "max_latency_us": 16324.421818181818 00:19:27.589 } 00:19:27.589 ], 00:19:27.589 "core_count": 1 00:19:27.589 } 00:19:27.589 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 90914 00:19:27.589 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90914 ']' 00:19:27.589 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90914 00:19:27.589 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:27.589 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90914 00:19:27.590 killing process with pid 90914 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90914' 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90914 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90914 00:19:27.590 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:27.590 [2024-11-19 16:14:17.237036] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:19:27.590 [2024-11-19 16:14:17.237111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90914 ] 00:19:27.590 [2024-11-19 16:14:17.386641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:27.590 [2024-11-19 16:14:17.410607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:27.590 [2024-11-19 16:14:17.445342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:27.590 Running I/O for 15 seconds... 
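[Note on the results JSON printed just above, before the replayed bdevperf per-run log continues: the figures are self-consistent with the 4 KiB verify workload at queue depth 128. A quick check:

    throughput  = iops * io_size / 2^20 = 9221.08 * 4096 / 1048576 ≈ 36.02 MiB/s   (matches "mibps")
    iops * avg_latency = 9221.08/s * 13.53 ms ≈ 125 I/Os in flight                  (just under the queue depth
                                                                                      of 128, consistent with the
                                                                                      queue briefly draining while
                                                                                      paths are switched)

The 3221 I/Os counted in io_failed line up with the ABORTED - SQ DELETION completions seen in the per-I/O log, i.e. requests that were outstanding on a path while its listener was being torn down during the forced failovers.]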
00:19:27.590 6962.00 IOPS, 27.20 MiB/s [2024-11-19T16:14:34.305Z] [2024-11-19 16:14:19.425550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.425814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:68608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.425840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.425866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:27.590 [2024-11-19 16:14:19.425880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.425919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.425949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.425975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.425990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:68688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:68696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426182] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:68712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:68720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:68728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.590 [2024-11-19 16:14:19.426537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.590 [2024-11-19 16:14:19.426566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.590 [2024-11-19 16:14:19.426581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:68744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:68760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:68792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68800 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.426974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.426989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:27.591 [2024-11-19 16:14:19.427171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427471] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427748] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.591 [2024-11-19 16:14:19.427775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.591 [2024-11-19 16:14:19.427789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.591 [2024-11-19 16:14:19.427801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.427982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.427995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:27.592 [2024-11-19 16:14:19.428343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.592 [2024-11-19 16:14:19.428467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428628] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.592 [2024-11-19 16:14:19.428683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c17730 is same with the state(6) to be set 00:19:27.592 [2024-11-19 16:14:19.428712] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.592 [2024-11-19 16:14:19.428722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.592 [2024-11-19 16:14:19.428732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69112 len:8 PRP1 0x0 PRP2 0x0 00:19:27.592 [2024-11-19 16:14:19.428744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.592 [2024-11-19 16:14:19.428767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.592 [2024-11-19 16:14:19.428776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69440 len:8 PRP1 0x0 PRP2 0x0 00:19:27.592 [2024-11-19 16:14:19.428789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.592 [2024-11-19 16:14:19.428828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.592 [2024-11-19 16:14:19.428837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69448 len:8 PRP1 0x0 PRP2 0x0 00:19:27.592 [2024-11-19 16:14:19.428850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.592 [2024-11-19 16:14:19.428872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.592 [2024-11-19 16:14:19.428882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69456 len:8 PRP1 0x0 PRP2 0x0 00:19:27.592 [2024-11-19 16:14:19.428894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.592 [2024-11-19 16:14:19.428916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.592 [2024-11-19 16:14:19.428926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69464 len:8 
PRP1 0x0 PRP2 0x0 00:19:27.592 [2024-11-19 16:14:19.428944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.592 [2024-11-19 16:14:19.428960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.428970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.428979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69472 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.428992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429004] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69480 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69488 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69496 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69504 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429202] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69512 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 
16:14:19.429223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429245] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69520 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69528 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429354] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69536 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69544 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69552 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69560 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429504] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69568 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69576 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69584 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69592 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69600 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69608 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429792] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69616 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429825] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.593 [2024-11-19 16:14:19.429834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.593 [2024-11-19 16:14:19.429844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69624 len:8 PRP1 0x0 PRP2 0x0 00:19:27.593 [2024-11-19 16:14:19.429856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429900] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:27.593 [2024-11-19 16:14:19.429955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.593 [2024-11-19 16:14:19.429976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.429990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.593 [2024-11-19 16:14:19.430003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.430016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.593 [2024-11-19 16:14:19.430028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.430041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.593 [2024-11-19 16:14:19.430054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.593 [2024-11-19 16:14:19.430067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:19:27.593 [2024-11-19 16:14:19.433543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:27.593 [2024-11-19 16:14:19.433579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf5a80 (9): Bad file descriptor 00:19:27.594 [2024-11-19 16:14:19.456915] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
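The burst of "ABORTED - SQ DELETION (00/08)" completions above ends with bdev_nvme failing the path over from 10.0.0.3:4420 to 10.0.0.3:4421 and successfully resetting the controller, after which I/O resumes at a higher rate. A minimal sketch for tallying how many READ/WRITE commands were aborted ahead of each failover event follows; it only uses the record formats visible in this output, and the input file name nvmf_failover.log is a hypothetical placeholder for a saved copy of this log, not a file the test suite produces.

    import re, sys, collections

    # Record formats copied from the nvme_qpair.c / bdev_nvme.c notices in this log.
    # A single physical log line may hold several records, so we scan the whole text in order.
    EVENT_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+"
        r"|spdk_nvme_print_completion: \*NOTICE\*: (?P<abort>ABORTED - SQ DELETION)"
        r"|Start failover from (?P<src>\S+) to (?P<dst>\S+)"
    )

    def summarize(path):
        text = open(path, encoding="utf-8", errors="replace").read()
        counts = collections.Counter()
        pending = None  # op of the most recently printed command; its completion is printed next
        for m in EVENT_RE.finditer(text):
            if m.group("op"):
                pending = m.group("op")
            elif m.group("abort") and pending:
                counts[pending] += 1
                pending = None
            elif m.group("src"):
                print(f"failover {m.group('src')} -> {m.group('dst')}: "
                      f"{counts['READ']} reads, {counts['WRITE']} writes aborted")
                counts.clear()
        if counts:
            print(f"aborts after last failover: {counts['READ']} reads, {counts['WRITE']} writes")

    if __name__ == "__main__":
        summarize(sys.argv[1] if len(sys.argv) > 1 else "nvmf_failover.log")

Run as "python3 summarize_aborts.py nvmf_failover.log" (script name also hypothetical); for the sequence above it would report one failover from 10.0.0.3:4420 to 10.0.0.3:4421 preceded by the aborted reads and writes printed between 16:14:19.425 and 16:14:19.429.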
00:19:27.594 8111.00 IOPS, 31.68 MiB/s [2024-11-19T16:14:34.309Z] 8751.33 IOPS, 34.18 MiB/s [2024-11-19T16:14:34.309Z] 8707.50 IOPS, 34.01 MiB/s [2024-11-19T16:14:34.309Z] [2024-11-19 16:14:23.079568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.079875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.079938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.079970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.080031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.080062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.080094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.080124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.594 [2024-11-19 16:14:23.080155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.594 [2024-11-19 16:14:23.080663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.594 [2024-11-19 16:14:23.080679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.080947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.080961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 
16:14:23.080977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.080992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:87760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:87832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:47 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.595 [2024-11-19 16:14:23.081898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.595 [2024-11-19 16:14:23.081973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.595 [2024-11-19 16:14:23.081987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:87904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:87928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:27.596 [2024-11-19 16:14:23.082369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:87960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.082463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:88432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082715] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:88440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:88464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:88480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.082974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:88488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.082989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:88496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.083020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.083059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:88512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.083092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.596 [2024-11-19 16:14:23.083123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:87976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:87984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:88000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:88008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:88016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.596 [2024-11-19 16:14:23.083399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.596 [2024-11-19 16:14:23.083415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c19820 is same with the state(6) to be set 00:19:27.596 [2024-11-19 16:14:23.083432] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.596 [2024-11-19 16:14:23.083444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88024 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083484] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88536 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083604] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88544 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88552 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88560 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88568 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083857] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88576 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88584 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.083939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.083970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.083984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88592 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.083998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88600 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88608 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084122] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88616 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:27.597 [2024-11-19 16:14:23.084175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084185] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88624 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88632 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084286] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88640 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88648 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88656 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.597 [2024-11-19 16:14:23.084468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.597 [2024-11-19 16:14:23.084479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88664 len:8 PRP1 0x0 PRP2 0x0 00:19:27.597 [2024-11-19 16:14:23.084500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084609] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:27.597 [2024-11-19 16:14:23.084707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.597 [2024-11-19 16:14:23.084730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.597 [2024-11-19 16:14:23.084763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.597 [2024-11-19 16:14:23.084792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.597 [2024-11-19 16:14:23.084820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.597 [2024-11-19 16:14:23.084834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:27.597 [2024-11-19 16:14:23.084870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf5a80 (9): Bad file descriptor 00:19:27.597 [2024-11-19 16:14:23.088960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:27.597 [2024-11-19 16:14:23.120274] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:19:27.597 8804.00 IOPS, 34.39 MiB/s [2024-11-19T16:14:34.312Z] 8979.33 IOPS, 35.08 MiB/s [2024-11-19T16:14:34.312Z] 9010.86 IOPS, 35.20 MiB/s [2024-11-19T16:14:34.312Z] 9036.50 IOPS, 35.30 MiB/s [2024-11-19T16:14:34.312Z] 9103.56 IOPS, 35.56 MiB/s [2024-11-19T16:14:34.312Z] [2024-11-19 16:14:27.648621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.597 [2024-11-19 16:14:27.648700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:49208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:49216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:49248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.648974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.648988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:49256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.649001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.649028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.649055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:49280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.649082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:49288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.598 [2024-11-19 16:14:27.649108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:49616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:49624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:49632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:49640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:49648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:49656 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:49672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:49680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:49712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:49720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:49728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:49736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649577] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:49744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:49752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:49760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:49776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.598 [2024-11-19 16:14:27.649714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.598 [2024-11-19 16:14:27.649729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.599 [2024-11-19 16:14:27.649742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.599 [2024-11-19 16:14:27.649756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:49792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.599 [2024-11-19 16:14:27.649769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.599 [2024-11-19 16:14:27.649783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:49800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:27.599 [2024-11-19 16:14:27.649796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.599 [2024-11-19 16:14:27.649810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-11-19 16:14:27.649823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.599 [2024-11-19 16:14:27.649837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:27.599 [2024-11-19 16:14:27.649849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.599
[log condensed: from 16:14:27.649864 to 16:14:27.652486, nvme_qpair.c repeats the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair for every queued command on sqid:1 (READ lba 49312-49608 and WRITE lba 49808-50200, len:8 each), all completed with ABORTED - SQ DELETION (00/08) as qpair 1 is deleted for failover]
00:19:27.601 [2024-11-19 16:14:27.652532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:27.601 [2024-11-19 16:14:27.652548] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:27.601 [2024-11-19 16:14:27.652559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49608 len:8 PRP1 0x0 PRP2 0x0 00:19:27.601 [2024-11-19 16:14:27.652572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.601 [2024-11-19 16:14:27.652628] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:27.601 [2024-11-19 16:14:27.652696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.601 [2024-11-19 16:14:27.652717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.601 [2024-11-19 16:14:27.652732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.601 [2024-11-19 16:14:27.652745] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.601 [2024-11-19 16:14:27.652759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.601 [2024-11-19 16:14:27.652772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.601 [2024-11-19 16:14:27.652785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:27.601 [2024-11-19 16:14:27.652798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:27.601 [2024-11-19 16:14:27.652811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:27.601 [2024-11-19 16:14:27.656453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:27.601 [2024-11-19 16:14:27.656492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf5a80 (9): Bad file descriptor 00:19:27.601 [2024-11-19 16:14:27.678994] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:19:27.601 9154.90 IOPS, 35.76 MiB/s [2024-11-19T16:14:34.316Z] 9106.64 IOPS, 35.57 MiB/s [2024-11-19T16:14:34.316Z] 9155.08 IOPS, 35.76 MiB/s [2024-11-19T16:14:34.316Z] 9209.00 IOPS, 35.97 MiB/s [2024-11-19T16:14:34.316Z] 9216.93 IOPS, 36.00 MiB/s [2024-11-19T16:14:34.316Z] 9219.53 IOPS, 36.01 MiB/s 00:19:27.601 Latency(us) 00:19:27.601 [2024-11-19T16:14:34.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.601 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:27.601 Verification LBA range: start 0x0 length 0x4000 00:19:27.601 NVMe0n1 : 15.01 9221.08 36.02 214.60 0.00 13533.99 614.40 16324.42 00:19:27.601 [2024-11-19T16:14:34.316Z] =================================================================================================================== 00:19:27.601 [2024-11-19T16:14:34.316Z] Total : 9221.08 36.02 214.60 0.00 13533.99 614.40 16324.42 00:19:27.601 Received shutdown signal, test time was about 15.000000 seconds 00:19:27.601 00:19:27.601 Latency(us) 00:19:27.601 [2024-11-19T16:14:34.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.601 [2024-11-19T16:14:34.316Z] =================================================================================================================== 00:19:27.601 [2024-11-19T16:14:34.316Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:27.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
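The trace that follows verifies the previous run saw exactly three failovers (count=3 from grep -c 'Resetting controller successful', then the (( count != 3 )) check), and wires up the next pass over RPC: two extra TCP listeners for nqn.2016-06.io.spdk:cnode1 plus one bdev_nvme_attach_controller call per portal, all with -x failover. A minimal sketch of that RPC sequence, assuming a target already serving the subsystem on 10.0.0.3:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock (an illustration, not the failover.sh source):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# Expose the subsystem on two more ports so the initiator has somewhere to fail over to.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4421
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4422
# Register the same controller name once per path; -x failover keeps the extra
# paths as standby targets rather than active multipath members.
for port in 4420 4421 4422; do
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n "$NQN" -x failover
done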
00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=91104 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 91104 /var/tmp/bdevperf.sock 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 91104 ']' 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:19:27.601 16:14:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:27.601 [2024-11-19 16:14:34.034524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:27.601 16:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:27.860 [2024-11-19 16:14:34.326880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:27.860 16:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:28.118 NVMe0n1 00:19:28.118 16:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:28.376 00:19:28.376 16:14:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:19:28.635 00:19:28.635 16:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:28.635 16:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:28.894 16:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.152 16:14:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:32.441 16:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:32.441 16:14:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:32.699 16:14:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=91173 00:19:32.699 16:14:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:32.699 16:14:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 91173 00:19:33.635 { 00:19:33.635 "results": [ 00:19:33.635 { 00:19:33.635 "job": "NVMe0n1", 00:19:33.635 "core_mask": "0x1", 00:19:33.635 "workload": "verify", 00:19:33.635 "status": "finished", 00:19:33.635 "verify_range": { 00:19:33.635 "start": 0, 00:19:33.635 "length": 16384 00:19:33.635 }, 00:19:33.635 "queue_depth": 128, 00:19:33.635 "io_size": 4096, 00:19:33.635 "runtime": 1.009332, 00:19:33.635 "iops": 6974.910138586709, 00:19:33.635 "mibps": 27.24574272885433, 00:19:33.635 "io_failed": 0, 00:19:33.635 "io_timeout": 0, 00:19:33.635 "avg_latency_us": 18234.706512396693, 00:19:33.635 "min_latency_us": 1385.1927272727273, 00:19:33.635 "max_latency_us": 16681.890909090907 00:19:33.635 } 00:19:33.635 ], 00:19:33.635 "core_count": 1 00:19:33.635 } 00:19:33.635 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:33.635 [2024-11-19 16:14:33.513402] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:19:33.635 [2024-11-19 16:14:33.514122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91104 ] 00:19:33.635 [2024-11-19 16:14:33.663403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.635 [2024-11-19 16:14:33.682922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:33.635 [2024-11-19 16:14:33.710955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:33.635 [2024-11-19 16:14:35.828096] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:33.635 [2024-11-19 16:14:35.828230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.635 [2024-11-19 16:14:35.828271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.635 [2024-11-19 16:14:35.828289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.635 [2024-11-19 16:14:35.828302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.635 [2024-11-19 16:14:35.828348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.635 [2024-11-19 16:14:35.828365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.635 [2024-11-19 16:14:35.828379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.635 [2024-11-19 16:14:35.828392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.635 [2024-11-19 16:14:35.828406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:19:33.635 [2024-11-19 16:14:35.828453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:19:33.635 [2024-11-19 16:14:35.828484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db7a80 (9): Bad file descriptor 00:19:33.635 [2024-11-19 16:14:35.837090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:19:33.635 Running I/O for 1 seconds... 
00:19:33.635 6912.00 IOPS, 27.00 MiB/s 00:19:33.635 Latency(us) 00:19:33.635 [2024-11-19T16:14:40.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.635 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:33.635 Verification LBA range: start 0x0 length 0x4000 00:19:33.635 NVMe0n1 : 1.01 6974.91 27.25 0.00 0.00 18234.71 1385.19 16681.89 00:19:33.635 [2024-11-19T16:14:40.350Z] =================================================================================================================== 00:19:33.635 [2024-11-19T16:14:40.350Z] Total : 6974.91 27.25 0.00 0.00 18234.71 1385.19 16681.89 00:19:33.635 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.635 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:34.203 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.461 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:34.461 16:14:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:34.719 16:14:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:34.978 16:14:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 91104 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 91104 ']' 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 91104 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91104 00:19:38.261 killing process with pid 91104 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91104' 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 91104 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 91104 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:38.261 16:14:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:38.519 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:38.778 rmmod nvme_tcp 00:19:38.778 rmmod nvme_fabrics 00:19:38.778 rmmod nvme_keyring 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 90866 ']' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 90866 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 90866 ']' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 90866 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90866 00:19:38.778 killing process with pid 90866 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90866' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 90866 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 90866 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:38.778 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:38.779 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:39.038 ************************************ 00:19:39.038 END TEST nvmf_failover 00:19:39.038 ************************************ 00:19:39.038 00:19:39.038 real 0m31.258s 00:19:39.038 user 2m0.743s 00:19:39.038 sys 0m5.400s 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.038 ************************************ 00:19:39.038 START TEST nvmf_host_discovery 00:19:39.038 ************************************ 00:19:39.038 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:39.300 * Looking for test storage... 
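The nvmf_veth_fini steps traced above unwind the virtual network the failover suite ran on using plain ip(8) calls. A condensed sketch of that teardown, reusing the interface and namespace names from the trace (the final netns removal is an assumption about what _remove_spdk_ns ends up doing):

# Detach each endpoint from the bridge, take it down, then delete the links.
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
# Assumed equivalent of _remove_spdk_ns: drop the target's network namespace.
ip netns delete nvmf_tgt_ns_spdk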
00:19:39.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:39.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.300 --rc genhtml_branch_coverage=1 00:19:39.300 --rc genhtml_function_coverage=1 00:19:39.300 --rc genhtml_legend=1 00:19:39.300 --rc geninfo_all_blocks=1 00:19:39.300 --rc geninfo_unexecuted_blocks=1 00:19:39.300 00:19:39.300 ' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:39.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.300 --rc genhtml_branch_coverage=1 00:19:39.300 --rc genhtml_function_coverage=1 00:19:39.300 --rc genhtml_legend=1 00:19:39.300 --rc geninfo_all_blocks=1 00:19:39.300 --rc geninfo_unexecuted_blocks=1 00:19:39.300 00:19:39.300 ' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:39.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.300 --rc genhtml_branch_coverage=1 00:19:39.300 --rc genhtml_function_coverage=1 00:19:39.300 --rc genhtml_legend=1 00:19:39.300 --rc geninfo_all_blocks=1 00:19:39.300 --rc geninfo_unexecuted_blocks=1 00:19:39.300 00:19:39.300 ' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:39.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:39.300 --rc genhtml_branch_coverage=1 00:19:39.300 --rc genhtml_function_coverage=1 00:19:39.300 --rc genhtml_legend=1 00:19:39.300 --rc geninfo_all_blocks=1 00:19:39.300 --rc geninfo_unexecuted_blocks=1 00:19:39.300 00:19:39.300 ' 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:39.300 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:39.301 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
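At this point nvmftestinit has selected the veth-based virtual topology (NET_TYPE=virt over tcp), and the cleanup attempts and setup commands traced next build it. As a condensed sketch, the topology amounts to the commands below, lifted from the nvmf_veth_init trace that follows; only the first initiator/target pair is shown, and the *_if2/*_br2 interfaces plus the link-up steps are analogous:

    ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                  # bridge ties the *_br peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br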
00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:39.301 Cannot find device "nvmf_init_br" 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:39.301 Cannot find device "nvmf_init_br2" 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:39.301 Cannot find device "nvmf_tgt_br" 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:39.301 16:14:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:39.301 Cannot find device "nvmf_tgt_br2" 00:19:39.301 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:39.301 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:39.580 Cannot find device "nvmf_init_br" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:39.580 Cannot find device "nvmf_init_br2" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:39.580 Cannot find device "nvmf_tgt_br" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:39.580 Cannot find device "nvmf_tgt_br2" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:39.580 Cannot find device "nvmf_br" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:39.580 Cannot find device "nvmf_init_if" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:39.580 Cannot find device "nvmf_init_if2" 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:39.580 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:39.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:39.581 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:39.581 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:39.581 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:39.581 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:19:39.581 00:19:39.581 --- 10.0.0.3 ping statistics --- 00:19:39.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.581 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:39.852 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:39.852 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:19:39.852 00:19:39.852 --- 10.0.0.4 ping statistics --- 00:19:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.852 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:39.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:39.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:39.852 00:19:39.852 --- 10.0.0.1 ping statistics --- 00:19:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.852 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:39.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:39.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:19:39.852 00:19:39.852 --- 10.0.0.2 ping statistics --- 00:19:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.852 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.852 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=91496 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 91496 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91496 ']' 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.853 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:39.853 [2024-11-19 16:14:46.374019] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
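The ipts wrapper traced above expands to plain iptables with an SPDK_NVMF marker comment, so teardown can later match and delete only the rules this test added. A minimal sketch of that idea (the real helper lives in nvmf/common.sh and may differ in detail), followed by the target launch the next lines report:

    ipts() {
        # tag every rule with its original arguments so cleanup can match on the comment
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    modprobe nvme-tcp
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app on core mask 0x2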
00:19:39.853 [2024-11-19 16:14:46.374326] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.853 [2024-11-19 16:14:46.514378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.853 [2024-11-19 16:14:46.531782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.853 [2024-11-19 16:14:46.531837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.853 [2024-11-19 16:14:46.531863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.853 [2024-11-19 16:14:46.531870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.853 [2024-11-19 16:14:46.531876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.853 [2024-11-19 16:14:46.532127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.853 [2024-11-19 16:14:46.559800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 [2024-11-19 16:14:46.679240] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 [2024-11-19 16:14:46.687406] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.112 16:14:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 null0 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 null1 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=91521 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 91521 /tmp/host.sock 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 91521 ']' 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:40.112 16:14:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:40.112 [2024-11-19 16:14:46.779606] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
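Two SPDK applications are now running: the nvmf target inside the namespace (pid 91496, default RPC socket) and a host-side app on /tmp/host.sock (pid 91521, core mask 0x1). The rpc_cmd helper routes to the target by default and to the host app when given -s /tmp/host.sock; with the stock rpc.py client the same control-plane calls would look roughly like this (scripts/rpc.py path assumed relative to the SPDK repo, a sketch rather than the test's exact helper):

    # target-side control plane (defaults to /var/tmp/spdk.sock)
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_null_create null0 1000 512          # 1000 MB backing size, 512 B blocks
    scripts/rpc.py bdev_null_create null1 1000 512
    # host-side control plane (the second app)
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs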
00:19:40.112 [2024-11-19 16:14:46.779958] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91521 ] 00:19:40.371 [2024-11-19 16:14:46.934470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.371 [2024-11-19 16:14:46.958649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.371 [2024-11-19 16:14:46.992762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.305 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.306 16:14:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.306 16:14:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 [2024-11-19 16:14:48.143755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.565 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:41.824 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:19:41.825 16:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:19:42.083 [2024-11-19 16:14:48.792209] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:42.083 [2024-11-19 16:14:48.792250] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:42.083 
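The burst of bdev_nvme INFO messages here is the discovery service reacting to the earlier bdev_nvme_start_discovery call: the discovery controller on 10.0.0.3:8009 attaches, reads the discovery log page, finds cnode0 on port 4420 now that this host NQN has been allowed, and creates controller nvme0 whose first namespace shows up as bdev nvme0n1. In rpc.py terms the host side of that exchange is roughly as below (a sketch; the test drives it through rpc_cmd -s /tmp/host.sock):

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers   # expect nvme0 once cnode0 admits this host
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs              # expect nvme0n1, backed by null0 on the target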
[2024-11-19 16:14:48.792287] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:42.342 [2024-11-19 16:14:48.798261] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:42.342 [2024-11-19 16:14:48.852637] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:19:42.342 [2024-11-19 16:14:48.853685] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x85d4b0:1 started. 00:19:42.342 [2024-11-19 16:14:48.855603] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:42.342 [2024-11-19 16:14:48.855798] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:42.342 [2024-11-19 16:14:48.860685] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x85d4b0 was disconnected and freed. delete nvme_qpair. 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:42.908 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.909 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.168 [2024-11-19 16:14:49.634444] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x82b170:1 started. 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.168 [2024-11-19 16:14:49.641206] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x82b170 was disconnected and freed. delete nvme_qpair. 
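Adding the second namespace (null1) produces one more bdev on the host app and therefore one more notification; the check that follows counts events newer than the last acknowledged notify_id and compares against expected_count. A sketch of that accounting with the stock client (socket and path assumptions as before):

    # count events newer than the last seen id (notify_id was 1 after nvme0n1 appeared)
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 1 | jq '. | length'   # expect 1 (the nvme0n2 registration)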
00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 [2024-11-19 16:14:49.749038] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:43.168 [2024-11-19 16:14:49.749541] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:43.168 [2024-11-19 16:14:49.749569] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:43.168 [2024-11-19 16:14:49.755538] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 [2024-11-19 16:14:49.815928] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:19:43.168 [2024-11-19 16:14:49.816122] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:43.168 [2024-11-19 16:14:49.816138] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:43.168 [2024-11-19 16:14:49.816144] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:43.168 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.427 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.428 [2024-11-19 16:14:49.982183] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:43.428 [2024-11-19 16:14:49.982215] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.428 [2024-11-19 16:14:49.982838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.428 [2024-11-19 16:14:49.982884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.428 [2024-11-19 16:14:49.982898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.428 [2024-11-19 16:14:49.982908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.428 [2024-11-19 16:14:49.982918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.428 [2024-11-19 16:14:49.982926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.428 [2024-11-19 16:14:49.982936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:43.428 [2024-11-19 16:14:49.982945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:43.428 [2024-11-19 16:14:49.982954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82d6e0 is same with the state(6) to be set 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval 
'[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:43.428 [2024-11-19 16:14:49.988184] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:43.428 [2024-11-19 16:14:49.988216] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:43.428 [2024-11-19 16:14:49.988328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x82d6e0 (9): Bad file descriptor 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.428 16:14:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.428 16:14:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:43.428 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:43.688 16:14:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:43.688 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.947 16:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.883 [2024-11-19 16:14:51.412801] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:44.883 [2024-11-19 16:14:51.412834] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:44.883 [2024-11-19 16:14:51.412851] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:44.883 [2024-11-19 16:14:51.418861] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:44.883 [2024-11-19 16:14:51.477181] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:19:44.883 [2024-11-19 16:14:51.477893] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x861450:1 started. 00:19:44.883 [2024-11-19 16:14:51.479992] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:44.883 [2024-11-19 16:14:51.480197] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.883 [2024-11-19 16:14:51.481793] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x861450 was disconnected and freed. delete nvme_qpair. 
00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.883 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.883 request: 00:19:44.883 { 00:19:44.883 "name": "nvme", 00:19:44.883 "trtype": "tcp", 00:19:44.883 "traddr": "10.0.0.3", 00:19:44.883 "adrfam": "ipv4", 00:19:44.883 "trsvcid": "8009", 00:19:44.883 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:44.883 "wait_for_attach": true, 00:19:44.883 "method": "bdev_nvme_start_discovery", 00:19:44.883 "req_id": 1 00:19:44.884 } 00:19:44.884 Got JSON-RPC error response 00:19:44.884 response: 00:19:44.884 { 00:19:44.884 "code": -17, 00:19:44.884 "message": "File exists" 00:19:44.884 } 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.884 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 request: 00:19:45.143 { 00:19:45.143 "name": "nvme_second", 00:19:45.143 "trtype": "tcp", 00:19:45.143 "traddr": "10.0.0.3", 00:19:45.143 "adrfam": "ipv4", 00:19:45.143 "trsvcid": "8009", 00:19:45.143 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:45.143 "wait_for_attach": true, 00:19:45.143 "method": "bdev_nvme_start_discovery", 00:19:45.143 "req_id": 1 00:19:45.143 } 00:19:45.143 Got JSON-RPC error response 00:19:45.143 response: 00:19:45.143 { 00:19:45.143 "code": -17, 00:19:45.143 "message": "File exists" 00:19:45.143 } 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # 
[[ -n '' ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:45.143 16:14:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.080 [2024-11-19 16:14:52.764556] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.080 [2024-11-19 16:14:52.764820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8479e0 with addr=10.0.0.3, port=8010 00:19:46.080 [2024-11-19 16:14:52.764850] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:46.080 [2024-11-19 16:14:52.764861] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:46.080 [2024-11-19 16:14:52.764870] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:47.455 [2024-11-19 16:14:53.764558] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:47.455 [2024-11-19 16:14:53.764833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8479e0 with addr=10.0.0.3, port=8010 00:19:47.455 [2024-11-19 16:14:53.764864] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:47.455 [2024-11-19 16:14:53.764876] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:47.455 [2024-11-19 16:14:53.764886] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:48.391 [2024-11-19 16:14:54.764421] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:48.391 request: 00:19:48.391 { 00:19:48.391 "name": "nvme_second", 00:19:48.391 "trtype": "tcp", 00:19:48.391 "traddr": "10.0.0.3", 00:19:48.391 "adrfam": "ipv4", 00:19:48.391 "trsvcid": "8010", 00:19:48.391 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:48.391 "wait_for_attach": false, 00:19:48.391 "attach_timeout_ms": 3000, 00:19:48.391 "method": "bdev_nvme_start_discovery", 00:19:48.391 "req_id": 1 00:19:48.391 } 00:19:48.391 Got JSON-RPC error response 00:19:48.391 response: 00:19:48.391 { 00:19:48.391 "code": -110, 00:19:48.391 "message": "Connection timed out" 00:19:48.391 } 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:48.391 16:14:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 91521 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:48.391 rmmod nvme_tcp 00:19:48.391 rmmod nvme_fabrics 00:19:48.391 rmmod nvme_keyring 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 91496 ']' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 91496 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 91496 ']' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 91496 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91496 00:19:48.391 killing process with pid 91496 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91496' 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 91496 00:19:48.391 16:14:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 91496 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:48.392 16:14:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:48.392 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:48.651 00:19:48.651 real 0m9.609s 00:19:48.651 user 0m18.625s 00:19:48.651 sys 0m1.957s 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:48.651 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:48.651 ************************************ 00:19:48.651 END TEST nvmf_host_discovery 00:19:48.651 ************************************ 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:48.910 
************************************ 00:19:48.910 START TEST nvmf_host_multipath_status 00:19:48.910 ************************************ 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:48.910 * Looking for test storage... 00:19:48.910 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:48.910 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.911 --rc genhtml_branch_coverage=1 00:19:48.911 --rc genhtml_function_coverage=1 00:19:48.911 --rc genhtml_legend=1 00:19:48.911 --rc geninfo_all_blocks=1 00:19:48.911 --rc geninfo_unexecuted_blocks=1 00:19:48.911 00:19:48.911 ' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.911 --rc genhtml_branch_coverage=1 00:19:48.911 --rc genhtml_function_coverage=1 00:19:48.911 --rc genhtml_legend=1 00:19:48.911 --rc geninfo_all_blocks=1 00:19:48.911 --rc geninfo_unexecuted_blocks=1 00:19:48.911 00:19:48.911 ' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.911 --rc genhtml_branch_coverage=1 00:19:48.911 --rc genhtml_function_coverage=1 00:19:48.911 --rc genhtml_legend=1 00:19:48.911 --rc geninfo_all_blocks=1 00:19:48.911 --rc geninfo_unexecuted_blocks=1 00:19:48.911 00:19:48.911 ' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:48.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:48.911 --rc genhtml_branch_coverage=1 00:19:48.911 --rc genhtml_function_coverage=1 00:19:48.911 --rc genhtml_legend=1 00:19:48.911 --rc geninfo_all_blocks=1 00:19:48.911 --rc geninfo_unexecuted_blocks=1 00:19:48.911 00:19:48.911 ' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:48.911 16:14:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:48.911 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:48.911 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:48.912 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:49.170 Cannot find device "nvmf_init_br" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:49.170 Cannot find device "nvmf_init_br2" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:49.170 Cannot find device "nvmf_tgt_br" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.170 Cannot find device "nvmf_tgt_br2" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:49.170 Cannot find device "nvmf_init_br" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:49.170 Cannot find device "nvmf_init_br2" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:49.170 Cannot find device "nvmf_tgt_br" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:49.170 Cannot find device "nvmf_tgt_br2" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:49.170 Cannot find device "nvmf_br" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:49.170 Cannot find device "nvmf_init_if" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:49.170 Cannot find device "nvmf_init_if2" 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.170 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:49.170 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:49.171 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:49.429 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:49.430 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:49.430 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:19:49.430 00:19:49.430 --- 10.0.0.3 ping statistics --- 00:19:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.430 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:49.430 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:49.430 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:19:49.430 00:19:49.430 --- 10.0.0.4 ping statistics --- 00:19:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.430 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:49.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:19:49.430 00:19:49.430 --- 10.0.0.1 ping statistics --- 00:19:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.430 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:49.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:19:49.430 00:19:49.430 --- 10.0.0.2 ping statistics --- 00:19:49.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.430 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=92029 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 92029 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 92029 ']' 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:49.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
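For orientation, the nvmf_veth_init sequence traced above boils down to the following topology and commands (names, addresses and flags are copied from the trace; the second interface pair and the individual link-up steps are omitted, so this is a condensed illustration rather than an excerpt of nvmf/common.sh):

    # default netns:          nvmf_init_if 10.0.0.1/24 (veth peer nvmf_init_br)
    # netns nvmf_tgt_ns_spdk: nvmf_tgt_if  10.0.0.3/24 (veth peer nvmf_tgt_br)
    # all *_br peers are enslaved to the bridge nvmf_br
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    # the target then runs inside the namespace:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3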
00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:49.430 16:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:49.430 [2024-11-19 16:14:56.041598] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:19:49.430 [2024-11-19 16:14:56.041701] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.689 [2024-11-19 16:14:56.198192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:49.689 [2024-11-19 16:14:56.222607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.689 [2024-11-19 16:14:56.222670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.689 [2024-11-19 16:14:56.222684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.689 [2024-11-19 16:14:56.222693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.689 [2024-11-19 16:14:56.222702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:49.689 [2024-11-19 16:14:56.223660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.689 [2024-11-19 16:14:56.223675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.689 [2024-11-19 16:14:56.260287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=92029 00:19:49.689 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:49.947 [2024-11-19 16:14:56.636459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.947 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:50.512 Malloc0 00:19:50.513 16:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:50.513 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.771 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:51.029 [2024-11-19 16:14:57.680759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.029 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:51.288 [2024-11-19 16:14:57.916947] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=92077 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 92077 /var/tmp/bdevperf.sock 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 92077 ']' 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
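The RPC sequence traced above provisions the target side of the test and starts the bdevperf host application. Collected in one place, with arguments copied from the trace (the $rpc shorthand for the full scripts/rpc.py path is introduced here only for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    # bdevperf runs on the initiator side in passive mode (-z) with its own RPC socket:
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 90 &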
00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.288 16:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:52.223 16:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.223 16:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:19:52.223 16:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:52.481 16:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:53.046 Nvme0n1 00:19:53.046 16:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:53.304 Nvme0n1 00:19:53.304 16:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:53.304 16:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.207 16:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:55.207 16:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:55.466 16:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:55.724 16:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:57.101 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.101 16:15:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:57.359 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:57.359 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:57.359 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.359 16:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:57.616 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.616 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:57.616 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:57.616 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.874 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.874 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:57.874 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.874 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:58.133 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.133 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:58.133 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.133 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:58.392 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.392 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:58.392 16:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:58.650 16:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:58.909 16:15:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:59.913 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:59.913 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:59.913 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:59.913 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.172 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.172 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:00.172 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.172 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.431 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.431 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.431 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.431 16:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.690 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.690 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:00.690 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.690 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.949 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.208 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.208 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:01.208 16:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:01.468 16:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:01.727 16:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:02.678 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:02.678 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:02.678 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.678 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:02.936 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:02.936 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:02.936 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:02.936 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:03.195 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.195 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:03.195 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.195 16:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:03.787 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.045 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.045 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:04.045 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.045 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:04.304 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.304 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:04.304 16:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:04.562 16:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:04.821 16:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:06.196 16:15:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.196 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:06.454 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.454 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:06.454 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.454 16:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:06.713 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.713 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:06.713 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.713 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:06.971 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.971 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:06.971 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:06.971 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.230 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.230 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:07.230 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.230 16:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:07.489 16:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.489 16:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:07.489 16:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:07.747 16:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:08.006 16:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:08.940 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:08.940 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:08.940 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:08.940 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:09.198 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.198 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:09.457 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.457 16:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:09.715 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.715 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:09.715 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.716 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.974 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:10.233 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.233 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:10.233 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.233 16:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:10.491 16:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.491 16:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:10.491 16:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:10.750 16:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:11.008 16:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.385 16:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:12.643 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.643 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:12.643 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.643 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
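Each check_status round in the trace is a series of port_status probes: bdev_nvme_get_io_paths is queried over the bdevperf RPC socket and a jq filter extracts one boolean (current, connected or accessible) for the path with a given trsvcid, which is then compared against the expected value. A sketch of that pattern, using only the RPC call and jq filter visible above (the helper body is an illustrative reconstruction; the real helper lives in host/multipath_status.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    port_status() {   # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 current true
      local port=$1 field=$2 expected=$3 value
      value=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
              jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
      [[ $value == "$expected" ]]
    }
    # after set_ANA_state inaccessible optimized, only the 4421 path should remain usable:
    port_status 4420 accessible false
    port_status 4421 accessible true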
00:20:12.902 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.902 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:12.902 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.902 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.161 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.161 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:13.161 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.161 16:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.420 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.420 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:13.420 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.420 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:13.679 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.679 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:13.938 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:13.938 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:14.197 16:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:14.456 16:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:15.392 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:15.393 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:15.393 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
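The transitions exercised above are driven by set_ANA_state, which reprograms the ANA state of both listeners, and by the multipath policy switch that follows: up to this point only one path is reported current at a time, and after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active the optimized/optimized case shows both paths current. A sketch using only the RPCs visible in the trace (the wrapper body is an illustration of the helper in host/multipath_status.sh, not a verbatim excerpt):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    set_ANA_state() {   # set_ANA_state <state for port 4420> <state for port 4421>
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
          -t tcp -a 10.0.0.3 -s 4421 -n "$2"
    }

    # switch the host from single-active path selection to active/active:
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    set_ANA_state optimized optimized   # with active_active, both listeners report "current": true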
00:20:15.393 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:15.651 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.651 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:15.651 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.651 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:15.910 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:15.910 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:15.910 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:15.910 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.169 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.169 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.169 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.169 16:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.738 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:16.997 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.997 
16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:16.997 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:17.257 16:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:17.516 16:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:18.921 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.180 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.180 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.180 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.180 16:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.439 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.439 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.439 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.439 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.698 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.698 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.698 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.698 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.957 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.957 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:19.957 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.957 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:20.216 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.216 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:20.216 16:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:20.475 16:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:20.734 16:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:21.671 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:21.671 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:21.671 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.671 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:21.930 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.930 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:21.930 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.930 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.190 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.190 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:22.190 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.190 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.449 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.449 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:22.449 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.449 16:15:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:22.709 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.709 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:22.709 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.709 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:22.967 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.967 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:22.967 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:22.967 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:23.227 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.227 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:23.227 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:23.486 16:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:23.744 16:15:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:24.680 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:24.681 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:24.681 16:15:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.681 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:24.940 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:24.940 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:24.940 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:24.940 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.199 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.199 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:25.199 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:25.199 16:15:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.458 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.458 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:25.458 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.458 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:25.718 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.718 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:25.718 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.718 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:25.977 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.977 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:25.977 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.977 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 92077 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 92077 ']' 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 92077 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92077 00:20:26.236 killing process with pid 92077 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92077' 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 92077 00:20:26.236 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 92077 00:20:26.236 { 00:20:26.236 "results": [ 00:20:26.236 { 00:20:26.236 "job": "Nvme0n1", 00:20:26.236 "core_mask": "0x4", 00:20:26.236 "workload": "verify", 00:20:26.236 "status": "terminated", 00:20:26.236 "verify_range": { 00:20:26.236 "start": 0, 00:20:26.236 "length": 16384 00:20:26.236 }, 00:20:26.236 "queue_depth": 128, 00:20:26.236 "io_size": 4096, 00:20:26.236 "runtime": 32.936472, 00:20:26.236 "iops": 9310.13497741956, 00:20:26.236 "mibps": 36.36771475554516, 00:20:26.236 "io_failed": 0, 00:20:26.236 "io_timeout": 0, 00:20:26.236 "avg_latency_us": 13720.629036466155, 00:20:26.236 "min_latency_us": 309.0618181818182, 00:20:26.236 "max_latency_us": 4026531.84 00:20:26.236 } 00:20:26.236 ], 00:20:26.236 "core_count": 1 00:20:26.236 } 00:20:26.500 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 92077 00:20:26.500 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.500 [2024-11-19 16:14:57.985948] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:20:26.500 [2024-11-19 16:14:57.986047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92077 ] 00:20:26.500 [2024-11-19 16:14:58.133299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.500 [2024-11-19 16:14:58.156619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.500 [2024-11-19 16:14:58.189976] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.500 Running I/O for 90 seconds... 
00:20:26.500 7460.00 IOPS, 29.14 MiB/s [2024-11-19T16:15:33.215Z] 7314.00 IOPS, 28.57 MiB/s [2024-11-19T16:15:33.215Z] 7217.33 IOPS, 28.19 MiB/s [2024-11-19T16:15:33.215Z] 7237.25 IOPS, 28.27 MiB/s [2024-11-19T16:15:33.215Z] 7300.00 IOPS, 28.52 MiB/s [2024-11-19T16:15:33.215Z] 7671.83 IOPS, 29.97 MiB/s [2024-11-19T16:15:33.215Z] 8060.43 IOPS, 31.49 MiB/s [2024-11-19T16:15:33.215Z] 8344.25 IOPS, 32.59 MiB/s [2024-11-19T16:15:33.215Z] 8603.33 IOPS, 33.61 MiB/s [2024-11-19T16:15:33.215Z] 8791.70 IOPS, 34.34 MiB/s [2024-11-19T16:15:33.215Z] 8877.55 IOPS, 34.68 MiB/s [2024-11-19T16:15:33.215Z] 9023.08 IOPS, 35.25 MiB/s [2024-11-19T16:15:33.215Z] 9151.77 IOPS, 35.75 MiB/s [2024-11-19T16:15:33.215Z] 9250.57 IOPS, 36.14 MiB/s [2024-11-19T16:15:33.215Z] [2024-11-19 16:15:14.352778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.352837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.352905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.352925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.352948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.352962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.352983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.352997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:25288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:25312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:25336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.500 [2024-11-19 16:15:14.353302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.500 [2024-11-19 16:15:14.353321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:25368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:24736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:24776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:24792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.353710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25400 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:25416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:25424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.353968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.353982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:25448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:25480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:25496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.501 [2024-11-19 16:15:14.354289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:24816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:24824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.501 [2024-11-19 16:15:14.354672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.501 [2024-11-19 16:15:14.354686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:20:26.502 [2024-11-19 16:15:14.354915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.354970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.354990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:25544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.502 [2024-11-19 16:15:14.355495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.355980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.502 [2024-11-19 16:15:14.355999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:25096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.502 [2024-11-19 16:15:14.356012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25104 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.356387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:25120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:25160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:25176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.356905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.356919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.503 [2024-11-19 16:15:14.361376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
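Each nvme_io_qpair_print_command / spdk_nvme_print_completion pair above records one I/O that the host saw complete with the NVMe path-related status ASYMMETRIC ACCESS INACCESSIBLE (status code type 0x3, status code 0x02), i.e. the command was outstanding on a listener whose ANA state was inaccessible at that point in the run, and the bdev_nvme layer is expected to retry it on the other path. When reading a saved try.txt by hand, an illustrative one-liner (not part of the test script) to tally such completions is:

    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt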
00:20:26.503 [2024-11-19 16:15:14.361574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:25688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:25696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.503 [2024-11-19 16:15:14.361903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.503 [2024-11-19 16:15:14.361917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:14.361942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:25728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:14.361956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:14.361981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:14.361995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:5 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:14.362020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:14.362034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:14.362063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:25752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:14.362078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.504 8944.27 IOPS, 34.94 MiB/s [2024-11-19T16:15:33.219Z] 8385.25 IOPS, 32.75 MiB/s [2024-11-19T16:15:33.219Z] 7892.00 IOPS, 30.83 MiB/s [2024-11-19T16:15:33.219Z] 7453.56 IOPS, 29.12 MiB/s [2024-11-19T16:15:33.219Z] 7378.58 IOPS, 28.82 MiB/s [2024-11-19T16:15:33.219Z] 7526.10 IOPS, 29.40 MiB/s [2024-11-19T16:15:33.219Z] 7669.76 IOPS, 29.96 MiB/s [2024-11-19T16:15:33.219Z] 7965.73 IOPS, 31.12 MiB/s [2024-11-19T16:15:33.219Z] 8228.39 IOPS, 32.14 MiB/s [2024-11-19T16:15:33.219Z] 8429.54 IOPS, 32.93 MiB/s [2024-11-19T16:15:33.219Z] 8521.24 IOPS, 33.29 MiB/s [2024-11-19T16:15:33.219Z] 8595.38 IOPS, 33.58 MiB/s [2024-11-19T16:15:33.219Z] 8661.89 IOPS, 33.84 MiB/s [2024-11-19T16:15:33.219Z] 8811.18 IOPS, 34.42 MiB/s [2024-11-19T16:15:33.219Z] 8985.03 IOPS, 35.10 MiB/s [2024-11-19T16:15:33.219Z] 9150.77 IOPS, 35.75 MiB/s [2024-11-19T16:15:33.219Z] [2024-11-19 16:15:30.278675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.278759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.278831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:7480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.278878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.278903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.278919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.278940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.278955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.278976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.278991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:7544 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.279344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:7688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:7704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:7736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.279816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.279850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.279885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.279906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.279920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.283605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.504 [2024-11-19 16:15:30.283635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.504 [2024-11-19 16:15:30.283662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.504 [2024-11-19 16:15:30.283679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.505 
[2024-11-19 16:15:30.283801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.283966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.283986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.505 [2024-11-19 16:15:30.284190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.505 [2024-11-19 16:15:30.284205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.505 9237.13 IOPS, 36.08 MiB/s [2024-11-19T16:15:33.220Z] 9279.84 IOPS, 36.25 MiB/s [2024-11-19T16:15:33.220Z] Received shutdown signal, test time was about 32.937220 seconds 00:20:26.505 00:20:26.505 Latency(us) 00:20:26.505 [2024-11-19T16:15:33.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.505 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.505 Verification LBA range: start 0x0 length 0x4000 00:20:26.505 Nvme0n1 : 32.94 9310.13 36.37 0.00 0.00 13720.63 309.06 4026531.84 00:20:26.505 [2024-11-19T16:15:33.220Z] =================================================================================================================== 00:20:26.505 [2024-11-19T16:15:33.220Z] Total : 9310.13 36.37 0.00 0.00 13720.63 309.06 4026531.84 00:20:26.505 16:15:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.505 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:26.505 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.505 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:26.505 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:26.505 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.765 rmmod nvme_tcp 00:20:26.765 rmmod nvme_fabrics 00:20:26.765 rmmod nvme_keyring 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 92029 ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 92029 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@954 -- # '[' -z 92029 ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 92029 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92029 00:20:26.765 killing process with pid 92029 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92029' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 92029 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 92029 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:26.765 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:27.025 
16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:27.025 ************************************ 00:20:27.025 END TEST nvmf_host_multipath_status 00:20:27.025 ************************************ 00:20:27.025 00:20:27.025 real 0m38.275s 00:20:27.025 user 2m3.924s 00:20:27.025 sys 0m11.060s 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.025 ************************************ 00:20:27.025 START TEST nvmf_discovery_remove_ifc 00:20:27.025 ************************************ 00:20:27.025 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:27.286 * Looking for test storage... 
00:20:27.286 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.286 --rc genhtml_branch_coverage=1 00:20:27.286 --rc genhtml_function_coverage=1 00:20:27.286 --rc genhtml_legend=1 00:20:27.286 --rc geninfo_all_blocks=1 00:20:27.286 --rc geninfo_unexecuted_blocks=1 00:20:27.286 00:20:27.286 ' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.286 --rc genhtml_branch_coverage=1 00:20:27.286 --rc genhtml_function_coverage=1 00:20:27.286 --rc genhtml_legend=1 00:20:27.286 --rc geninfo_all_blocks=1 00:20:27.286 --rc geninfo_unexecuted_blocks=1 00:20:27.286 00:20:27.286 ' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.286 --rc genhtml_branch_coverage=1 00:20:27.286 --rc genhtml_function_coverage=1 00:20:27.286 --rc genhtml_legend=1 00:20:27.286 --rc geninfo_all_blocks=1 00:20:27.286 --rc geninfo_unexecuted_blocks=1 00:20:27.286 00:20:27.286 ' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:27.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.286 --rc genhtml_branch_coverage=1 00:20:27.286 --rc genhtml_function_coverage=1 00:20:27.286 --rc genhtml_legend=1 00:20:27.286 --rc geninfo_all_blocks=1 00:20:27.286 --rc geninfo_unexecuted_blocks=1 00:20:27.286 00:20:27.286 ' 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.286 16:15:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:27.286 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.287 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.287 16:15:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:27.287 Cannot find device "nvmf_init_br" 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:27.287 Cannot find device "nvmf_init_br2" 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:27.287 Cannot find device "nvmf_tgt_br" 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:27.287 16:15:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.547 Cannot find device "nvmf_tgt_br2" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:27.547 Cannot find device "nvmf_init_br" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:27.547 Cannot find device "nvmf_init_br2" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:27.547 Cannot find device "nvmf_tgt_br" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:27.547 Cannot find device "nvmf_tgt_br2" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:27.547 Cannot find device "nvmf_br" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:27.547 Cannot find device "nvmf_init_if" 00:20:27.547 16:15:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:27.547 Cannot find device "nvmf_init_if2" 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.547 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.547 16:15:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:27.547 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:27.807 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:27.807 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:20:27.807 00:20:27.807 --- 10.0.0.3 ping statistics --- 00:20:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.807 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:27.807 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:27.807 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:20:27.807 00:20:27.807 --- 10.0.0.4 ping statistics --- 00:20:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.807 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:27.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:20:27.807 00:20:27.807 --- 10.0.0.1 ping statistics --- 00:20:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.807 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:27.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:20:27.807 00:20:27.807 --- 10.0.0.2 ping statistics --- 00:20:27.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.807 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=92897 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 92897 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92897 ']' 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:27.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
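The nvmf/common.sh trace above, from @177 through the ping checks, is the whole point of the veth fixture: a private namespace holds the target side of two veth pairs, both ends hang off a bridge, and iptables holes are punched for the NVMe/TCP port. A minimal sketch of that topology, reduced to the single initiator/target pair this test exercises and assuming root privileges, looks roughly like this (the trace builds a second "_if2" pair the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP data port
  ping -c 1 10.0.0.3                                                   # same sanity check as the log

The four pings that follow in the log confirm both directions of this fabric before any SPDK process is started.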
00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:27.807 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.807 [2024-11-19 16:15:34.414016] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:20:27.807 [2024-11-19 16:15:34.414101] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.067 [2024-11-19 16:15:34.563280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.067 [2024-11-19 16:15:34.581026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.067 [2024-11-19 16:15:34.581089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.067 [2024-11-19 16:15:34.581098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.067 [2024-11-19 16:15:34.581105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.067 [2024-11-19 16:15:34.581111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.067 [2024-11-19 16:15:34.581381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.067 [2024-11-19 16:15:34.608361] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.067 [2024-11-19 16:15:34.718933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.067 [2024-11-19 16:15:34.727028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:28.067 null0 00:20:28.067 [2024-11-19 16:15:34.758970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:28.067 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92926 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92926 /tmp/host.sock 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 92926 ']' 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.327 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.327 16:15:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.327 [2024-11-19 16:15:34.844368] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:20:28.327 [2024-11-19 16:15:34.844464] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92926 ] 00:20:28.327 [2024-11-19 16:15:34.999226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.327 [2024-11-19 16:15:35.023221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.587 [2024-11-19 16:15:35.140795] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.587 16:15:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.524 [2024-11-19 16:15:36.181748] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:29.524 [2024-11-19 16:15:36.181773] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:29.524 [2024-11-19 16:15:36.181806] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:29.524 [2024-11-19 16:15:36.187812] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:29.783 [2024-11-19 16:15:36.242175] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:20:29.783 [2024-11-19 16:15:36.243143] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x6c5000:1 started. 00:20:29.783 [2024-11-19 16:15:36.244683] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:29.783 [2024-11-19 16:15:36.244752] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:29.783 [2024-11-19 16:15:36.244777] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:29.783 [2024-11-19 16:15:36.244792] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:29.783 [2024-11-19 16:15:36.244812] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.783 [2024-11-19 16:15:36.250538] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x6c5000 was disconnected and freed. delete nvme_qpair. 
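Two SPDK processes are in play here, and everything after nvmfappstart is driven over their RPC sockets. rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py (it adds retries and the -s socket argument), so the sequence above is roughly equivalent to the following direct calls; treat this as a sketch of the traced commands, not a complete target configuration:

  # Target: runs inside the namespace, listens for RPC on the default /var/tmp/spdk.sock
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # Host side: a second SPDK app used purely as the NVMe-oF initiator, RPC on /tmp/host.sock
  ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  scripts/rpc.py -s /tmp/host.sock framework_start_init
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the last call only returns once the discovered subsystem (nqn.2016-06.io.spdk:cnode0 on 10.0.0.3:4420) has been attached, which is what the discovery_attach_cb and ctrlr-created messages that follow report.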
00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:29.783 16:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.720 16:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:32.097 16:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:33.032 16:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:33.969 16:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:35.346 16:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.346 [2024-11-19 16:15:41.684187] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:35.346 [2024-11-19 16:15:41.684294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.346 [2024-11-19 16:15:41.684312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.346 [2024-11-19 16:15:41.684325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.346 [2024-11-19 16:15:41.684334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.346 [2024-11-19 16:15:41.684344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.346 [2024-11-19 16:15:41.684353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.346 [2024-11-19 16:15:41.684362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.346 [2024-11-19 16:15:41.684370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.346 [2024-11-19 16:15:41.684380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.346 [2024-11-19 16:15:41.684389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.346 [2024-11-19 16:15:41.684399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a0980 is same with the state(6) to be set 
00:20:35.346 [2024-11-19 16:15:41.694188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a0980 (9): Bad file descriptor 00:20:35.346 [2024-11-19 16:15:41.704198] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:20:35.346 [2024-11-19 16:15:41.704234] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:20:35.346 [2024-11-19 16:15:41.704240] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:35.346 [2024-11-19 16:15:41.704246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:35.346 [2024-11-19 16:15:41.704302] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.284 [2024-11-19 16:15:42.744360] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:36.284 [2024-11-19 16:15:42.744486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a0980 with addr=10.0.0.3, port=4420 00:20:36.284 [2024-11-19 16:15:42.744520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6a0980 is same with the state(6) to be set 00:20:36.284 [2024-11-19 16:15:42.744578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6a0980 (9): Bad file descriptor 00:20:36.284 [2024-11-19 16:15:42.745478] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:20:36.284 [2024-11-19 16:15:42.745581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:36.284 [2024-11-19 16:15:42.745607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:36.284 [2024-11-19 16:15:42.745630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:36.284 [2024-11-19 16:15:42.745650] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:36.284 [2024-11-19 16:15:42.745665] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:36.284 [2024-11-19 16:15:42.745677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:20:36.284 [2024-11-19 16:15:42.745698] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:20:36.284 [2024-11-19 16:15:42.745711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.284 16:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.220 [2024-11-19 16:15:43.745786] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:20:37.220 [2024-11-19 16:15:43.745834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:20:37.220 [2024-11-19 16:15:43.745856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:20:37.220 [2024-11-19 16:15:43.745882] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:20:37.220 [2024-11-19 16:15:43.745891] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:20:37.220 [2024-11-19 16:15:43.745899] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:20:37.220 [2024-11-19 16:15:43.745905] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:20:37.220 [2024-11-19 16:15:43.745910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
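The host gives up this quickly because of the flags passed to bdev_nvme_start_discovery earlier; as a rough reading of those options (my gloss, not something the log itself states):

  # --reconnect-delay-sec 1       wait ~1s between reconnect attempts to 10.0.0.3:4420
  # --fast-io-fail-timeout-sec 1  after ~1s of path loss, start failing outstanding I/O fast
  # --ctrlr-loss-timeout-sec 2    after ~2s without a working connection, declare the
  #                               controller lost, delete its bdev (nvme0n1) and drop
  #                               the discovery entry

With a 2-second loss budget and 1-second retry spacing, a failed reconnect or two is enough to exhaust the budget, which is consistent with the "Resetting controller failed" and remove_discovery_entry lines that follow.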
00:20:37.220 [2024-11-19 16:15:43.745940] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:37.220 [2024-11-19 16:15:43.745974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.220 [2024-11-19 16:15:43.745988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.220 [2024-11-19 16:15:43.746000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.220 [2024-11-19 16:15:43.746008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.220 [2024-11-19 16:15:43.746017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.221 [2024-11-19 16:15:43.746025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.221 [2024-11-19 16:15:43.746034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.221 [2024-11-19 16:15:43.746041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.221 [2024-11-19 16:15:43.746050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.221 [2024-11-19 16:15:43.746058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.221 [2024-11-19 16:15:43.746066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:20:37.221 [2024-11-19 16:15:43.746749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68ef00 (9): Bad file descriptor 00:20:37.221 [2024-11-19 16:15:43.747736] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:37.221 [2024-11-19 16:15:43.747761] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:37.221 16:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:38.651 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.651 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.651 16:15:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:38.652 16:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.273 [2024-11-19 16:15:45.755826] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:39.273 [2024-11-19 16:15:45.755851] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:39.273 [2024-11-19 16:15:45.755886] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:39.273 [2024-11-19 16:15:45.761860] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:39.273 [2024-11-19 16:15:45.816183] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:20:39.273 [2024-11-19 16:15:45.816879] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x67bab0:1 started. 00:20:39.273 [2024-11-19 16:15:45.818012] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:39.273 [2024-11-19 16:15:45.818067] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:39.273 [2024-11-19 16:15:45.818087] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:39.273 [2024-11-19 16:15:45.818102] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:39.273 [2024-11-19 16:15:45.818109] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:39.273 [2024-11-19 16:15:45.824344] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x67bab0 was disconnected and freed. delete nvme_qpair. 
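Stripping away the per-call xtrace, the whole exercise from interface removal to rediscovery is a short loop around two small helpers. The following is a reconstruction from the trace (the real discovery_remove_ifc.sh helpers also carry a timeout), using the same RPC socket and interface names seen above:

  # Helpers the trace keeps re-running once per second
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {   # poll until the bdev list equals the expected string
      local expected=$1
      while [[ "$(get_bdev_list)" != "$expected" ]]; do sleep 1; done
  }

  # Fault injection and recovery, as traced at discovery_remove_ifc.sh@75-76 and @82-83
  ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down
  wait_for_bdev ''          # nvme0n1 must vanish once the controller is declared lost
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  wait_for_bdev nvme1n1     # rediscovery attaches a fresh controller, so the bdev returns as nvme1

The nvme1n1 name matters: the original controller was torn down rather than reconnected, so the re-attached subsystem comes back as a new nvme1 controller, which is what the get_bdev_list check that follows confirms.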
00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.273 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:39.531 16:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92926 ']' 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.531 killing process with pid 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92926' 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92926 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.531 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.531 rmmod nvme_tcp 00:20:39.531 rmmod nvme_fabrics 00:20:39.790 rmmod nvme_keyring 00:20:39.790 16:15:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 92897 ']' 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 92897 ']' 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:39.790 killing process with pid 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 92897' 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 92897 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:39.790 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:40.049 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:40.049 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:40.050 00:20:40.050 real 0m12.931s 00:20:40.050 user 0m22.163s 00:20:40.050 sys 0m2.334s 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.050 ************************************ 00:20:40.050 END TEST nvmf_discovery_remove_ifc 00:20:40.050 ************************************ 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.050 ************************************ 00:20:40.050 START TEST nvmf_identify_kernel_target 00:20:40.050 ************************************ 00:20:40.050 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:40.310 * Looking for test storage... 
00:20:40.310 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:40.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.310 --rc genhtml_branch_coverage=1 00:20:40.310 --rc genhtml_function_coverage=1 00:20:40.310 --rc genhtml_legend=1 00:20:40.310 --rc geninfo_all_blocks=1 00:20:40.310 --rc geninfo_unexecuted_blocks=1 00:20:40.310 00:20:40.310 ' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:40.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.310 --rc genhtml_branch_coverage=1 00:20:40.310 --rc genhtml_function_coverage=1 00:20:40.310 --rc genhtml_legend=1 00:20:40.310 --rc geninfo_all_blocks=1 00:20:40.310 --rc geninfo_unexecuted_blocks=1 00:20:40.310 00:20:40.310 ' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:40.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.310 --rc genhtml_branch_coverage=1 00:20:40.310 --rc genhtml_function_coverage=1 00:20:40.310 --rc genhtml_legend=1 00:20:40.310 --rc geninfo_all_blocks=1 00:20:40.310 --rc geninfo_unexecuted_blocks=1 00:20:40.310 00:20:40.310 ' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:40.310 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.310 --rc genhtml_branch_coverage=1 00:20:40.310 --rc genhtml_function_coverage=1 00:20:40.310 --rc genhtml_legend=1 00:20:40.310 --rc geninfo_all_blocks=1 00:20:40.310 --rc geninfo_unexecuted_blocks=1 00:20:40.310 00:20:40.310 ' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.310 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.311 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:40.311 16:15:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.311 16:15:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:40.311 Cannot find device "nvmf_init_br" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:40.311 Cannot find device "nvmf_init_br2" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:40.311 Cannot find device "nvmf_tgt_br" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.311 Cannot find device "nvmf_tgt_br2" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:40.311 Cannot find device "nvmf_init_br" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:40.311 Cannot find device "nvmf_init_br2" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:40.311 Cannot find device "nvmf_tgt_br" 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:40.311 16:15:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:40.311 Cannot find device "nvmf_tgt_br2" 00:20:40.311 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:40.311 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:40.311 Cannot find device "nvmf_br" 00:20:40.311 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:40.311 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:40.571 Cannot find device "nvmf_init_if" 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:40.571 Cannot find device "nvmf_init_if2" 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.571 16:15:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:40.571 16:15:47 
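The nvmf_veth_init trace above builds the throwaway test network the rest of this log depends on: two initiator veth pairs stay in the root namespace (10.0.0.1 and 10.0.0.2), two target pairs are moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the bridge-facing peers are enslaved to nvmf_br in the lines that follow. A condensed sketch of the equivalent commands, using the interface names from the trace but collapsed into loops for brevity:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk      # target ends live inside the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  for dev in nvmf_br nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
  done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br                 # the enslaving step traced just below
  done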
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:40.571 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.571 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:20:40.571 00:20:40.571 --- 10.0.0.3 ping statistics --- 00:20:40.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.571 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:40.571 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:40.571 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.045 ms 00:20:40.571 00:20:40.571 --- 10.0.0.4 ping statistics --- 00:20:40.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.571 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:40.571 00:20:40.571 --- 10.0.0.1 ping statistics --- 00:20:40.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.571 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:40.571 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:40.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:40.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:20:40.830 00:20:40.830 --- 10.0.0.2 ping statistics --- 00:20:40.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.830 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
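Each firewall rule the ipts helper adds in the trace above carries an SPDK_NVMF comment; that tag is what lets the teardown at the end of this test strip only the rules this run created (the iptr step further down does iptables-save | grep -v SPDK_NVMF | iptables-restore). A minimal sketch of the same pattern, restating the rules shown in the trace:

  # open the NVMe/TCP listener port on the initiator-facing interfaces, tagging each rule
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  # let bridged traffic pass between the veth peers
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  # during cleanup, drop everything carrying the tag in one pass
  iptables-save | grep -v SPDK_NVMF | iptables-restore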
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:40.830 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:41.089 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.089 Waiting for block devices as requested 00:20:41.089 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.349 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:41.349 No valid GPT data, bailing 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:41.349 16:15:47 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:41.349 16:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:41.349 No valid GPT data, bailing 00:20:41.349 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:41.608 No valid GPT data, bailing 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:41.608 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
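The loop traced around this point walks /sys/block/nvme*, skips zoned namespaces, and treats any device for which no partition data is found ("No valid GPT data, bailing") as free; the last free device, /dev/nvme1n1 here, becomes the backing namespace. The configure_kernel_target steps that follow in the trace then export that namespace through the kernel nvmet configfs tree. A condensed sketch of both stages; the redirect targets of the echo commands are not visible in the xtrace output, so the standard nvmet attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are assumed here, and blkid stands in for the repo's spdk-gpt.py helper:

  # 1) pick a free, non-zoned NVMe namespace to back the kernel target
  nvme=
  for block in /sys/block/nvme*; do
      dev=${block##*/}
      [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
      # no recognizable partition table -> namespace is treated as unused
      [[ -z $(blkid -s PTTYPE -o value "/dev/$dev") ]] && nvme=/dev/$dev
  done

  # 2) export it as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 over TCP
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  modprobe nvmet
  modprobe nvmet-tcp                    # assumed; the trace only loads nvmet explicitly
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed target; matches the Model Number reported below
  echo 1         > "$subsys/attr_allow_any_host"                 # assumed target of the first 'echo 1'
  echo "$nvme"   > "$subsys/namespaces/1/device_path"
  echo 1         > "$subsys/namespaces/1/enable"
  echo 10.0.0.1  > "$nvmet/ports/1/addr_traddr"
  echo tcp       > "$nvmet/ports/1/addr_trtype"
  echo 4420      > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4      > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # makes the subsystem reachable on the port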
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:41.609 No valid GPT data, bailing 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -a 10.0.0.1 -t tcp -s 4420 00:20:41.609 00:20:41.609 Discovery Log Number of Records 2, Generation counter 2 00:20:41.609 =====Discovery Log Entry 0====== 00:20:41.609 trtype: tcp 00:20:41.609 adrfam: ipv4 00:20:41.609 subtype: current discovery subsystem 00:20:41.609 treq: not specified, sq flow control disable supported 00:20:41.609 portid: 1 00:20:41.609 trsvcid: 4420 00:20:41.609 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:41.609 traddr: 10.0.0.1 00:20:41.609 eflags: none 00:20:41.609 sectype: none 00:20:41.609 =====Discovery Log Entry 1====== 00:20:41.609 trtype: tcp 00:20:41.609 adrfam: ipv4 00:20:41.609 subtype: nvme subsystem 00:20:41.609 treq: not 
specified, sq flow control disable supported 00:20:41.609 portid: 1 00:20:41.609 trsvcid: 4420 00:20:41.609 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:41.609 traddr: 10.0.0.1 00:20:41.609 eflags: none 00:20:41.609 sectype: none 00:20:41.609 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:41.609 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:41.868 ===================================================== 00:20:41.868 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:41.868 ===================================================== 00:20:41.868 Controller Capabilities/Features 00:20:41.868 ================================ 00:20:41.868 Vendor ID: 0000 00:20:41.868 Subsystem Vendor ID: 0000 00:20:41.868 Serial Number: f7c07fe1b14710e9e646 00:20:41.868 Model Number: Linux 00:20:41.868 Firmware Version: 6.8.9-20 00:20:41.868 Recommended Arb Burst: 0 00:20:41.868 IEEE OUI Identifier: 00 00 00 00:20:41.868 Multi-path I/O 00:20:41.868 May have multiple subsystem ports: No 00:20:41.868 May have multiple controllers: No 00:20:41.868 Associated with SR-IOV VF: No 00:20:41.868 Max Data Transfer Size: Unlimited 00:20:41.868 Max Number of Namespaces: 0 00:20:41.868 Max Number of I/O Queues: 1024 00:20:41.868 NVMe Specification Version (VS): 1.3 00:20:41.868 NVMe Specification Version (Identify): 1.3 00:20:41.868 Maximum Queue Entries: 1024 00:20:41.868 Contiguous Queues Required: No 00:20:41.868 Arbitration Mechanisms Supported 00:20:41.868 Weighted Round Robin: Not Supported 00:20:41.868 Vendor Specific: Not Supported 00:20:41.868 Reset Timeout: 7500 ms 00:20:41.868 Doorbell Stride: 4 bytes 00:20:41.868 NVM Subsystem Reset: Not Supported 00:20:41.868 Command Sets Supported 00:20:41.868 NVM Command Set: Supported 00:20:41.868 Boot Partition: Not Supported 00:20:41.868 Memory Page Size Minimum: 4096 bytes 00:20:41.868 Memory Page Size Maximum: 4096 bytes 00:20:41.868 Persistent Memory Region: Not Supported 00:20:41.868 Optional Asynchronous Events Supported 00:20:41.868 Namespace Attribute Notices: Not Supported 00:20:41.868 Firmware Activation Notices: Not Supported 00:20:41.868 ANA Change Notices: Not Supported 00:20:41.868 PLE Aggregate Log Change Notices: Not Supported 00:20:41.868 LBA Status Info Alert Notices: Not Supported 00:20:41.868 EGE Aggregate Log Change Notices: Not Supported 00:20:41.868 Normal NVM Subsystem Shutdown event: Not Supported 00:20:41.868 Zone Descriptor Change Notices: Not Supported 00:20:41.868 Discovery Log Change Notices: Supported 00:20:41.868 Controller Attributes 00:20:41.868 128-bit Host Identifier: Not Supported 00:20:41.868 Non-Operational Permissive Mode: Not Supported 00:20:41.868 NVM Sets: Not Supported 00:20:41.868 Read Recovery Levels: Not Supported 00:20:41.868 Endurance Groups: Not Supported 00:20:41.868 Predictable Latency Mode: Not Supported 00:20:41.868 Traffic Based Keep ALive: Not Supported 00:20:41.868 Namespace Granularity: Not Supported 00:20:41.868 SQ Associations: Not Supported 00:20:41.868 UUID List: Not Supported 00:20:41.868 Multi-Domain Subsystem: Not Supported 00:20:41.868 Fixed Capacity Management: Not Supported 00:20:41.868 Variable Capacity Management: Not Supported 00:20:41.868 Delete Endurance Group: Not Supported 00:20:41.868 Delete NVM Set: Not Supported 00:20:41.868 Extended LBA Formats Supported: Not Supported 00:20:41.868 Flexible Data 
Placement Supported: Not Supported 00:20:41.868 00:20:41.868 Controller Memory Buffer Support 00:20:41.868 ================================ 00:20:41.868 Supported: No 00:20:41.868 00:20:41.868 Persistent Memory Region Support 00:20:41.868 ================================ 00:20:41.868 Supported: No 00:20:41.868 00:20:41.868 Admin Command Set Attributes 00:20:41.868 ============================ 00:20:41.868 Security Send/Receive: Not Supported 00:20:41.868 Format NVM: Not Supported 00:20:41.868 Firmware Activate/Download: Not Supported 00:20:41.868 Namespace Management: Not Supported 00:20:41.868 Device Self-Test: Not Supported 00:20:41.868 Directives: Not Supported 00:20:41.868 NVMe-MI: Not Supported 00:20:41.869 Virtualization Management: Not Supported 00:20:41.869 Doorbell Buffer Config: Not Supported 00:20:41.869 Get LBA Status Capability: Not Supported 00:20:41.869 Command & Feature Lockdown Capability: Not Supported 00:20:41.869 Abort Command Limit: 1 00:20:41.869 Async Event Request Limit: 1 00:20:41.869 Number of Firmware Slots: N/A 00:20:41.869 Firmware Slot 1 Read-Only: N/A 00:20:41.869 Firmware Activation Without Reset: N/A 00:20:41.869 Multiple Update Detection Support: N/A 00:20:41.869 Firmware Update Granularity: No Information Provided 00:20:41.869 Per-Namespace SMART Log: No 00:20:41.869 Asymmetric Namespace Access Log Page: Not Supported 00:20:41.869 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:41.869 Command Effects Log Page: Not Supported 00:20:41.869 Get Log Page Extended Data: Supported 00:20:41.869 Telemetry Log Pages: Not Supported 00:20:41.869 Persistent Event Log Pages: Not Supported 00:20:41.869 Supported Log Pages Log Page: May Support 00:20:41.869 Commands Supported & Effects Log Page: Not Supported 00:20:41.869 Feature Identifiers & Effects Log Page:May Support 00:20:41.869 NVMe-MI Commands & Effects Log Page: May Support 00:20:41.869 Data Area 4 for Telemetry Log: Not Supported 00:20:41.869 Error Log Page Entries Supported: 1 00:20:41.869 Keep Alive: Not Supported 00:20:41.869 00:20:41.869 NVM Command Set Attributes 00:20:41.869 ========================== 00:20:41.869 Submission Queue Entry Size 00:20:41.869 Max: 1 00:20:41.869 Min: 1 00:20:41.869 Completion Queue Entry Size 00:20:41.869 Max: 1 00:20:41.869 Min: 1 00:20:41.869 Number of Namespaces: 0 00:20:41.869 Compare Command: Not Supported 00:20:41.869 Write Uncorrectable Command: Not Supported 00:20:41.869 Dataset Management Command: Not Supported 00:20:41.869 Write Zeroes Command: Not Supported 00:20:41.869 Set Features Save Field: Not Supported 00:20:41.869 Reservations: Not Supported 00:20:41.869 Timestamp: Not Supported 00:20:41.869 Copy: Not Supported 00:20:41.869 Volatile Write Cache: Not Present 00:20:41.869 Atomic Write Unit (Normal): 1 00:20:41.869 Atomic Write Unit (PFail): 1 00:20:41.869 Atomic Compare & Write Unit: 1 00:20:41.869 Fused Compare & Write: Not Supported 00:20:41.869 Scatter-Gather List 00:20:41.869 SGL Command Set: Supported 00:20:41.869 SGL Keyed: Not Supported 00:20:41.869 SGL Bit Bucket Descriptor: Not Supported 00:20:41.869 SGL Metadata Pointer: Not Supported 00:20:41.869 Oversized SGL: Not Supported 00:20:41.869 SGL Metadata Address: Not Supported 00:20:41.869 SGL Offset: Supported 00:20:41.869 Transport SGL Data Block: Not Supported 00:20:41.869 Replay Protected Memory Block: Not Supported 00:20:41.869 00:20:41.869 Firmware Slot Information 00:20:41.869 ========================= 00:20:41.869 Active slot: 0 00:20:41.869 00:20:41.869 00:20:41.869 Error Log 
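Both controller dumps in this test come from the same tool: spdk_nvme_identify is run twice, once against the discovery subsystem and once against nqn.2016-06.io.spdk:testnqn, using a -r transport-ID string instead of a local PCI address, and the kernel target is cross-checked beforehand with nvme discover. Condensed from the trace, with the full build and host paths dropped:

  nvme discover -t tcp -a 10.0.0.1 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 \
      --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'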
00:20:41.869 ========= 00:20:41.869 00:20:41.869 Active Namespaces 00:20:41.869 ================= 00:20:41.869 Discovery Log Page 00:20:41.869 ================== 00:20:41.869 Generation Counter: 2 00:20:41.869 Number of Records: 2 00:20:41.869 Record Format: 0 00:20:41.869 00:20:41.869 Discovery Log Entry 0 00:20:41.869 ---------------------- 00:20:41.869 Transport Type: 3 (TCP) 00:20:41.869 Address Family: 1 (IPv4) 00:20:41.869 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:41.869 Entry Flags: 00:20:41.869 Duplicate Returned Information: 0 00:20:41.869 Explicit Persistent Connection Support for Discovery: 0 00:20:41.869 Transport Requirements: 00:20:41.869 Secure Channel: Not Specified 00:20:41.869 Port ID: 1 (0x0001) 00:20:41.869 Controller ID: 65535 (0xffff) 00:20:41.869 Admin Max SQ Size: 32 00:20:41.869 Transport Service Identifier: 4420 00:20:41.869 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:41.869 Transport Address: 10.0.0.1 00:20:41.869 Discovery Log Entry 1 00:20:41.869 ---------------------- 00:20:41.869 Transport Type: 3 (TCP) 00:20:41.869 Address Family: 1 (IPv4) 00:20:41.869 Subsystem Type: 2 (NVM Subsystem) 00:20:41.869 Entry Flags: 00:20:41.869 Duplicate Returned Information: 0 00:20:41.869 Explicit Persistent Connection Support for Discovery: 0 00:20:41.869 Transport Requirements: 00:20:41.869 Secure Channel: Not Specified 00:20:41.869 Port ID: 1 (0x0001) 00:20:41.869 Controller ID: 65535 (0xffff) 00:20:41.869 Admin Max SQ Size: 32 00:20:41.869 Transport Service Identifier: 4420 00:20:41.869 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:41.869 Transport Address: 10.0.0.1 00:20:41.869 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.129 get_feature(0x01) failed 00:20:42.129 get_feature(0x02) failed 00:20:42.129 get_feature(0x04) failed 00:20:42.129 ===================================================== 00:20:42.129 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:42.129 ===================================================== 00:20:42.129 Controller Capabilities/Features 00:20:42.129 ================================ 00:20:42.129 Vendor ID: 0000 00:20:42.129 Subsystem Vendor ID: 0000 00:20:42.129 Serial Number: a36625f603e3f74def70 00:20:42.129 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:42.129 Firmware Version: 6.8.9-20 00:20:42.129 Recommended Arb Burst: 6 00:20:42.129 IEEE OUI Identifier: 00 00 00 00:20:42.129 Multi-path I/O 00:20:42.129 May have multiple subsystem ports: Yes 00:20:42.129 May have multiple controllers: Yes 00:20:42.129 Associated with SR-IOV VF: No 00:20:42.129 Max Data Transfer Size: Unlimited 00:20:42.129 Max Number of Namespaces: 1024 00:20:42.129 Max Number of I/O Queues: 128 00:20:42.129 NVMe Specification Version (VS): 1.3 00:20:42.129 NVMe Specification Version (Identify): 1.3 00:20:42.129 Maximum Queue Entries: 1024 00:20:42.129 Contiguous Queues Required: No 00:20:42.129 Arbitration Mechanisms Supported 00:20:42.129 Weighted Round Robin: Not Supported 00:20:42.129 Vendor Specific: Not Supported 00:20:42.129 Reset Timeout: 7500 ms 00:20:42.129 Doorbell Stride: 4 bytes 00:20:42.129 NVM Subsystem Reset: Not Supported 00:20:42.129 Command Sets Supported 00:20:42.129 NVM Command Set: Supported 00:20:42.129 Boot Partition: Not Supported 00:20:42.129 Memory 
Page Size Minimum: 4096 bytes 00:20:42.129 Memory Page Size Maximum: 4096 bytes 00:20:42.129 Persistent Memory Region: Not Supported 00:20:42.129 Optional Asynchronous Events Supported 00:20:42.129 Namespace Attribute Notices: Supported 00:20:42.129 Firmware Activation Notices: Not Supported 00:20:42.129 ANA Change Notices: Supported 00:20:42.129 PLE Aggregate Log Change Notices: Not Supported 00:20:42.129 LBA Status Info Alert Notices: Not Supported 00:20:42.129 EGE Aggregate Log Change Notices: Not Supported 00:20:42.129 Normal NVM Subsystem Shutdown event: Not Supported 00:20:42.129 Zone Descriptor Change Notices: Not Supported 00:20:42.129 Discovery Log Change Notices: Not Supported 00:20:42.129 Controller Attributes 00:20:42.129 128-bit Host Identifier: Supported 00:20:42.129 Non-Operational Permissive Mode: Not Supported 00:20:42.129 NVM Sets: Not Supported 00:20:42.129 Read Recovery Levels: Not Supported 00:20:42.129 Endurance Groups: Not Supported 00:20:42.129 Predictable Latency Mode: Not Supported 00:20:42.129 Traffic Based Keep ALive: Supported 00:20:42.129 Namespace Granularity: Not Supported 00:20:42.129 SQ Associations: Not Supported 00:20:42.129 UUID List: Not Supported 00:20:42.129 Multi-Domain Subsystem: Not Supported 00:20:42.129 Fixed Capacity Management: Not Supported 00:20:42.129 Variable Capacity Management: Not Supported 00:20:42.129 Delete Endurance Group: Not Supported 00:20:42.129 Delete NVM Set: Not Supported 00:20:42.129 Extended LBA Formats Supported: Not Supported 00:20:42.129 Flexible Data Placement Supported: Not Supported 00:20:42.129 00:20:42.129 Controller Memory Buffer Support 00:20:42.129 ================================ 00:20:42.129 Supported: No 00:20:42.129 00:20:42.129 Persistent Memory Region Support 00:20:42.129 ================================ 00:20:42.129 Supported: No 00:20:42.129 00:20:42.129 Admin Command Set Attributes 00:20:42.129 ============================ 00:20:42.129 Security Send/Receive: Not Supported 00:20:42.129 Format NVM: Not Supported 00:20:42.129 Firmware Activate/Download: Not Supported 00:20:42.129 Namespace Management: Not Supported 00:20:42.129 Device Self-Test: Not Supported 00:20:42.129 Directives: Not Supported 00:20:42.129 NVMe-MI: Not Supported 00:20:42.129 Virtualization Management: Not Supported 00:20:42.129 Doorbell Buffer Config: Not Supported 00:20:42.129 Get LBA Status Capability: Not Supported 00:20:42.129 Command & Feature Lockdown Capability: Not Supported 00:20:42.129 Abort Command Limit: 4 00:20:42.129 Async Event Request Limit: 4 00:20:42.129 Number of Firmware Slots: N/A 00:20:42.129 Firmware Slot 1 Read-Only: N/A 00:20:42.129 Firmware Activation Without Reset: N/A 00:20:42.129 Multiple Update Detection Support: N/A 00:20:42.129 Firmware Update Granularity: No Information Provided 00:20:42.129 Per-Namespace SMART Log: Yes 00:20:42.129 Asymmetric Namespace Access Log Page: Supported 00:20:42.129 ANA Transition Time : 10 sec 00:20:42.129 00:20:42.129 Asymmetric Namespace Access Capabilities 00:20:42.129 ANA Optimized State : Supported 00:20:42.129 ANA Non-Optimized State : Supported 00:20:42.129 ANA Inaccessible State : Supported 00:20:42.129 ANA Persistent Loss State : Supported 00:20:42.129 ANA Change State : Supported 00:20:42.129 ANAGRPID is not changed : No 00:20:42.129 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:42.129 00:20:42.129 ANA Group Identifier Maximum : 128 00:20:42.129 Number of ANA Group Identifiers : 128 00:20:42.129 Max Number of Allowed Namespaces : 1024 00:20:42.129 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:42.129 Command Effects Log Page: Supported 00:20:42.129 Get Log Page Extended Data: Supported 00:20:42.129 Telemetry Log Pages: Not Supported 00:20:42.129 Persistent Event Log Pages: Not Supported 00:20:42.129 Supported Log Pages Log Page: May Support 00:20:42.129 Commands Supported & Effects Log Page: Not Supported 00:20:42.129 Feature Identifiers & Effects Log Page:May Support 00:20:42.129 NVMe-MI Commands & Effects Log Page: May Support 00:20:42.129 Data Area 4 for Telemetry Log: Not Supported 00:20:42.129 Error Log Page Entries Supported: 128 00:20:42.129 Keep Alive: Supported 00:20:42.129 Keep Alive Granularity: 1000 ms 00:20:42.129 00:20:42.129 NVM Command Set Attributes 00:20:42.129 ========================== 00:20:42.129 Submission Queue Entry Size 00:20:42.129 Max: 64 00:20:42.129 Min: 64 00:20:42.129 Completion Queue Entry Size 00:20:42.129 Max: 16 00:20:42.129 Min: 16 00:20:42.129 Number of Namespaces: 1024 00:20:42.129 Compare Command: Not Supported 00:20:42.129 Write Uncorrectable Command: Not Supported 00:20:42.129 Dataset Management Command: Supported 00:20:42.129 Write Zeroes Command: Supported 00:20:42.129 Set Features Save Field: Not Supported 00:20:42.129 Reservations: Not Supported 00:20:42.129 Timestamp: Not Supported 00:20:42.129 Copy: Not Supported 00:20:42.129 Volatile Write Cache: Present 00:20:42.129 Atomic Write Unit (Normal): 1 00:20:42.129 Atomic Write Unit (PFail): 1 00:20:42.129 Atomic Compare & Write Unit: 1 00:20:42.129 Fused Compare & Write: Not Supported 00:20:42.130 Scatter-Gather List 00:20:42.130 SGL Command Set: Supported 00:20:42.130 SGL Keyed: Not Supported 00:20:42.130 SGL Bit Bucket Descriptor: Not Supported 00:20:42.130 SGL Metadata Pointer: Not Supported 00:20:42.130 Oversized SGL: Not Supported 00:20:42.130 SGL Metadata Address: Not Supported 00:20:42.130 SGL Offset: Supported 00:20:42.130 Transport SGL Data Block: Not Supported 00:20:42.130 Replay Protected Memory Block: Not Supported 00:20:42.130 00:20:42.130 Firmware Slot Information 00:20:42.130 ========================= 00:20:42.130 Active slot: 0 00:20:42.130 00:20:42.130 Asymmetric Namespace Access 00:20:42.130 =========================== 00:20:42.130 Change Count : 0 00:20:42.130 Number of ANA Group Descriptors : 1 00:20:42.130 ANA Group Descriptor : 0 00:20:42.130 ANA Group ID : 1 00:20:42.130 Number of NSID Values : 1 00:20:42.130 Change Count : 0 00:20:42.130 ANA State : 1 00:20:42.130 Namespace Identifier : 1 00:20:42.130 00:20:42.130 Commands Supported and Effects 00:20:42.130 ============================== 00:20:42.130 Admin Commands 00:20:42.130 -------------- 00:20:42.130 Get Log Page (02h): Supported 00:20:42.130 Identify (06h): Supported 00:20:42.130 Abort (08h): Supported 00:20:42.130 Set Features (09h): Supported 00:20:42.130 Get Features (0Ah): Supported 00:20:42.130 Asynchronous Event Request (0Ch): Supported 00:20:42.130 Keep Alive (18h): Supported 00:20:42.130 I/O Commands 00:20:42.130 ------------ 00:20:42.130 Flush (00h): Supported 00:20:42.130 Write (01h): Supported LBA-Change 00:20:42.130 Read (02h): Supported 00:20:42.130 Write Zeroes (08h): Supported LBA-Change 00:20:42.130 Dataset Management (09h): Supported 00:20:42.130 00:20:42.130 Error Log 00:20:42.130 ========= 00:20:42.130 Entry: 0 00:20:42.130 Error Count: 0x3 00:20:42.130 Submission Queue Id: 0x0 00:20:42.130 Command Id: 0x5 00:20:42.130 Phase Bit: 0 00:20:42.130 Status Code: 0x2 00:20:42.130 Status Code Type: 0x0 00:20:42.130 Do Not Retry: 1 00:20:42.130 Error 
Location: 0x28 00:20:42.130 LBA: 0x0 00:20:42.130 Namespace: 0x0 00:20:42.130 Vendor Log Page: 0x0 00:20:42.130 ----------- 00:20:42.130 Entry: 1 00:20:42.130 Error Count: 0x2 00:20:42.130 Submission Queue Id: 0x0 00:20:42.130 Command Id: 0x5 00:20:42.130 Phase Bit: 0 00:20:42.130 Status Code: 0x2 00:20:42.130 Status Code Type: 0x0 00:20:42.130 Do Not Retry: 1 00:20:42.130 Error Location: 0x28 00:20:42.130 LBA: 0x0 00:20:42.130 Namespace: 0x0 00:20:42.130 Vendor Log Page: 0x0 00:20:42.130 ----------- 00:20:42.130 Entry: 2 00:20:42.130 Error Count: 0x1 00:20:42.130 Submission Queue Id: 0x0 00:20:42.130 Command Id: 0x4 00:20:42.130 Phase Bit: 0 00:20:42.130 Status Code: 0x2 00:20:42.130 Status Code Type: 0x0 00:20:42.130 Do Not Retry: 1 00:20:42.130 Error Location: 0x28 00:20:42.130 LBA: 0x0 00:20:42.130 Namespace: 0x0 00:20:42.130 Vendor Log Page: 0x0 00:20:42.130 00:20:42.130 Number of Queues 00:20:42.130 ================ 00:20:42.130 Number of I/O Submission Queues: 128 00:20:42.130 Number of I/O Completion Queues: 128 00:20:42.130 00:20:42.130 ZNS Specific Controller Data 00:20:42.130 ============================ 00:20:42.130 Zone Append Size Limit: 0 00:20:42.130 00:20:42.130 00:20:42.130 Active Namespaces 00:20:42.130 ================= 00:20:42.130 get_feature(0x05) failed 00:20:42.130 Namespace ID:1 00:20:42.130 Command Set Identifier: NVM (00h) 00:20:42.130 Deallocate: Supported 00:20:42.130 Deallocated/Unwritten Error: Not Supported 00:20:42.130 Deallocated Read Value: Unknown 00:20:42.130 Deallocate in Write Zeroes: Not Supported 00:20:42.130 Deallocated Guard Field: 0xFFFF 00:20:42.130 Flush: Supported 00:20:42.130 Reservation: Not Supported 00:20:42.130 Namespace Sharing Capabilities: Multiple Controllers 00:20:42.130 Size (in LBAs): 1310720 (5GiB) 00:20:42.130 Capacity (in LBAs): 1310720 (5GiB) 00:20:42.130 Utilization (in LBAs): 1310720 (5GiB) 00:20:42.130 UUID: def35387-bcfc-4edc-8198-10c5f52f40c2 00:20:42.130 Thin Provisioning: Not Supported 00:20:42.130 Per-NS Atomic Units: Yes 00:20:42.130 Atomic Boundary Size (Normal): 0 00:20:42.130 Atomic Boundary Size (PFail): 0 00:20:42.130 Atomic Boundary Offset: 0 00:20:42.130 NGUID/EUI64 Never Reused: No 00:20:42.130 ANA group ID: 1 00:20:42.130 Namespace Write Protected: No 00:20:42.130 Number of LBA Formats: 1 00:20:42.130 Current LBA Format: LBA Format #00 00:20:42.130 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:42.130 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:42.130 rmmod nvme_tcp 00:20:42.130 rmmod nvme_fabrics 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:42.130 16:15:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:42.130 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
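Teardown mirrors the setup: nvmftestfini, traced above, unloads the host-side nvme-tcp and nvme-fabrics modules, strips the SPDK_NVMF-tagged firewall rules, and deletes the veth/bridge/namespace topology, while clean_kernel_target, traced just below, unwinds the configfs tree in reverse order and removes the nvmet modules. A condensed sketch, under the same configfs-path assumptions as the setup sketch earlier:

  # host / network side
  modprobe -r nvme-tcp nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" nomaster; done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk          # handled by remove_spdk_ns in the trace

  # kernel target side, reverse of the setup
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  echo 0 > "$subsys/namespaces/1/enable"    # assumed target of the 'echo 0' traced below
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet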
# return 0 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:42.390 16:15:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:42.390 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:43.324 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.324 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:43.324 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:43.324 00:20:43.324 real 0m3.211s 00:20:43.324 user 0m1.137s 00:20:43.324 sys 0m1.476s 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.324 ************************************ 00:20:43.324 END TEST nvmf_identify_kernel_target 00:20:43.324 ************************************ 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.324 ************************************ 00:20:43.324 START TEST nvmf_auth_host 00:20:43.324 ************************************ 00:20:43.324 16:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:43.584 * Looking for test storage... 
00:20:43.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.584 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.585 --rc genhtml_branch_coverage=1 00:20:43.585 --rc genhtml_function_coverage=1 00:20:43.585 --rc genhtml_legend=1 00:20:43.585 --rc geninfo_all_blocks=1 00:20:43.585 --rc geninfo_unexecuted_blocks=1 00:20:43.585 00:20:43.585 ' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.585 --rc genhtml_branch_coverage=1 00:20:43.585 --rc genhtml_function_coverage=1 00:20:43.585 --rc genhtml_legend=1 00:20:43.585 --rc geninfo_all_blocks=1 00:20:43.585 --rc geninfo_unexecuted_blocks=1 00:20:43.585 00:20:43.585 ' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.585 --rc genhtml_branch_coverage=1 00:20:43.585 --rc genhtml_function_coverage=1 00:20:43.585 --rc genhtml_legend=1 00:20:43.585 --rc geninfo_all_blocks=1 00:20:43.585 --rc geninfo_unexecuted_blocks=1 00:20:43.585 00:20:43.585 ' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:43.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.585 --rc genhtml_branch_coverage=1 00:20:43.585 --rc genhtml_function_coverage=1 00:20:43.585 --rc genhtml_legend=1 00:20:43.585 --rc geninfo_all_blocks=1 00:20:43.585 --rc geninfo_unexecuted_blocks=1 00:20:43.585 00:20:43.585 ' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.585 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:43.586 Cannot find device "nvmf_init_br" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:43.586 Cannot find device "nvmf_init_br2" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:43.586 Cannot find device "nvmf_tgt_br" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.586 Cannot find device "nvmf_tgt_br2" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:43.586 Cannot find device "nvmf_init_br" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:43.586 Cannot find device "nvmf_init_br2" 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:43.586 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:43.586 Cannot find device "nvmf_tgt_br" 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:43.845 Cannot find device "nvmf_tgt_br2" 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:43.845 Cannot find device "nvmf_br" 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:43.845 Cannot find device "nvmf_init_if" 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:43.845 Cannot find device "nvmf_init_if2" 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.845 16:15:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.845 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:43.846 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
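
The long run of "Cannot find device" / "Cannot open network namespace" messages above comes from nvmf_veth_init's pre-cleanup pass: it tries to delete any leftovers from a previous run and tolerates each failure (every failing command is followed by a `# true`). It then builds a purely virtual NVMe/TCP fabric out of a network namespace, veth pairs and a bridge. A minimal sketch of that topology, reduced to one initiator/target pair but using the same names and addresses as the trace:

    ip netns add nvmf_tgt_ns_spdk                                     # target side lives in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                           # bridge the host-side peers together
    ip link set nvmf_tgt_br master nvmf_br

The trace does this twice (nvmf_init_if2/nvmf_tgt_if2 carry 10.0.0.2 and 10.0.0.4). The ipts wrapper that follows inserts iptables ACCEPT rules for TCP port 4420 on the initiator interfaces plus a FORWARD accept on the bridge, each tagged with an SPDK_NVMF comment so the rules can be removed on teardown, and the four pings confirm reachability in both directions.
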
00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:44.105 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.105 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:20:44.105 00:20:44.105 --- 10.0.0.3 ping statistics --- 00:20:44.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.105 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:44.105 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:44.105 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:20:44.105 00:20:44.105 --- 10.0.0.4 ping statistics --- 00:20:44.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.105 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.105 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.105 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:44.105 00:20:44.105 --- 10.0.0.1 ping statistics --- 00:20:44.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.105 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:44.105 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:44.105 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:44.105 00:20:44.105 --- 10.0.0.2 ping statistics --- 00:20:44.105 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.105 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=93913 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 93913 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93913 ']' 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
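
With connectivity verified, nvmfappstart prepends the namespace wrapper to the app command (NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")), loads nvme-tcp on the host side, and launches the SPDK target inside nvmf_tgt_ns_spdk so it owns the 10.0.0.3/10.0.0.4 end of the fabric; waitforlisten then blocks until the app's JSON-RPC socket answers. A simplified sketch of that sequence (the real waitforlisten in autotest_common.sh adds timeouts and process-liveness checks, and the rpc_get_methods probe here is just one way to test the socket):

    modprobe nvme-tcp                                              # kernel NVMe/TCP initiator for the host side
    ip netns exec nvmf_tgt_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &           # target with nvme_auth debug logging enabled
    nvmfpid=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1                                                  # keep polling until the RPC socket is up
    done

The RPC socket /var/tmp/spdk.sock stays reachable from the host side even though the target's TCP listeners do not, because pathname UNIX domain sockets are scoped by the filesystem rather than the network namespace.
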
00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.105 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.363 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.363 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:44.363 16:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fbe03804281f7995dcbc4a8feaf60dde 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Gvm 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fbe03804281f7995dcbc4a8feaf60dde 0 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fbe03804281f7995dcbc4a8feaf60dde 0 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fbe03804281f7995dcbc4a8feaf60dde 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:44.363 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.622 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Gvm 00:20:44.622 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Gvm 00:20:44.622 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Gvm 00:20:44.622 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:44.622 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.622 16:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6cc5b68951c21b20461996803a1ec85dd33bdae7690d73b5c164e0341425b69f 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kVk 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6cc5b68951c21b20461996803a1ec85dd33bdae7690d73b5c164e0341425b69f 3 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6cc5b68951c21b20461996803a1ec85dd33bdae7690d73b5c164e0341425b69f 3 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6cc5b68951c21b20461996803a1ec85dd33bdae7690d73b5c164e0341425b69f 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kVk 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kVk 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.kVk 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=703a0f90ffa1e3138e4cf33b27eb816caad34aca1bf3dd74 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.EjR 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 703a0f90ffa1e3138e4cf33b27eb816caad34aca1bf3dd74 0 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 703a0f90ffa1e3138e4cf33b27eb816caad34aca1bf3dd74 0 
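
The block above is the first of five key/ckey pairs the auth test pre-generates (the last controller key is deliberately left empty). gen_dhchap_key <digest> <len> reads len/2 random bytes with xxd, format_key wraps them into a DH-HMAC-CHAP secret via the inline `python -` step, and the result lands in a mode-0600 temp file whose path is stored in keys[]/ckeys[]. A rough stand-in for what that python step produces, assuming the standard DHHC-1 secret representation (digest id, then base64 of the key bytes followed by a little-endian CRC-32 of them, with a trailing colon); the authoritative encoder is format_key in nvmf/common.sh, whose heredoc body is not captured by the xtrace:

    key=$(xxd -p -c0 -l 16 /dev/urandom)        # 16 random bytes as 32 hex chars (the "null 32" case above)
    file=$(mktemp -t spdk.key-null.XXX)
    # assumed encoding: base64(key || crc32_le(key)), digest id 00 = no hash transform
    python3 -c 'import sys, base64, zlib; k = bytes.fromhex(sys.argv[1]); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"                                # this path is what keyring_file_add_key registers later on

The digest argument (0-3 for null/sha256/sha384/sha512) only selects how the secret may later be transformed during DH-HMAC-CHAP; the random material itself is produced the same way for every variant.
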
00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=703a0f90ffa1e3138e4cf33b27eb816caad34aca1bf3dd74 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.EjR 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.EjR 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EjR 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4423cbc34ab06a0b82de6325f2509cc7a1c890c00c5868a 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.9AN 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4423cbc34ab06a0b82de6325f2509cc7a1c890c00c5868a 2 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4423cbc34ab06a0b82de6325f2509cc7a1c890c00c5868a 2 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f4423cbc34ab06a0b82de6325f2509cc7a1c890c00c5868a 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.9AN 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.9AN 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.9AN 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.623 16:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e116fb30ed018126b87879648f73a364 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.78h 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e116fb30ed018126b87879648f73a364 1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e116fb30ed018126b87879648f73a364 1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e116fb30ed018126b87879648f73a364 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:44.623 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.78h 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.78h 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.78h 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ecc96ee05927a55b9cb10fdf2c344036 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NuT 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ecc96ee05927a55b9cb10fdf2c344036 1 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ecc96ee05927a55b9cb10fdf2c344036 1 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ecc96ee05927a55b9cb10fdf2c344036 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NuT 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NuT 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.NuT 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:20:44.882 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2526cca204d2b6ba3f9d739a592a4373d2fb301a3f262dff 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.oTb 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2526cca204d2b6ba3f9d739a592a4373d2fb301a3f262dff 2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2526cca204d2b6ba3f9d739a592a4373d2fb301a3f262dff 2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2526cca204d2b6ba3f9d739a592a4373d2fb301a3f262dff 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.oTb 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.oTb 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.oTb 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:20:44.883 16:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=abc3aa2ebdee822ea4ea67b73a3d5799 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Bc2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key abc3aa2ebdee822ea4ea67b73a3d5799 0 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 abc3aa2ebdee822ea4ea67b73a3d5799 0 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=abc3aa2ebdee822ea4ea67b73a3d5799 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Bc2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Bc2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Bc2 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=877592a07a9b2be868b93ec7ce7c16b15144ccb4eac71dbdae7652e5c28841c9 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.qeC 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 877592a07a9b2be868b93ec7ce7c16b15144ccb4eac71dbdae7652e5c28841c9 3 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 877592a07a9b2be868b93ec7ce7c16b15144ccb4eac71dbdae7652e5c28841c9 3 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=877592a07a9b2be868b93ec7ce7c16b15144ccb4eac71dbdae7652e5c28841c9 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:20:44.883 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.qeC 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.qeC 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.qeC 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 93913 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 93913 ']' 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:45.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:45.142 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Gvm 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.kVk ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kVk 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EjR 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.9AN ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.9AN 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.78h 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.NuT ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NuT 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.oTb 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Bc2 ]] 00:20:45.402 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Bc2 00:20:45.402 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.qeC 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:45.403 16:15:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:45.403 16:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.971 Waiting for block devices as requested 00:20:45.971 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.971 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:46.538 No valid GPT data, bailing 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:46.538 No valid GPT data, bailing 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:46.538 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:46.539 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:46.539 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:46.539 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:46.798 No valid GPT data, bailing 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:46.798 No valid GPT data, bailing 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -a 10.0.0.1 -t tcp -s 4420 00:20:46.798 00:20:46.798 Discovery Log Number of Records 2, Generation counter 2 00:20:46.798 =====Discovery Log Entry 0====== 00:20:46.798 trtype: tcp 00:20:46.798 adrfam: ipv4 00:20:46.798 subtype: current discovery subsystem 00:20:46.798 treq: not specified, sq flow control disable supported 00:20:46.798 portid: 1 00:20:46.798 trsvcid: 4420 00:20:46.798 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:46.798 traddr: 10.0.0.1 00:20:46.798 eflags: none 00:20:46.798 sectype: none 00:20:46.798 =====Discovery Log Entry 1====== 00:20:46.798 trtype: tcp 00:20:46.798 adrfam: ipv4 00:20:46.798 subtype: nvme subsystem 00:20:46.798 treq: not specified, sq flow control disable supported 00:20:46.798 portid: 1 00:20:46.798 trsvcid: 4420 00:20:46.798 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:46.798 traddr: 10.0.0.1 00:20:46.798 eflags: none 00:20:46.798 sectype: none 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:46.798 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.057 nvme0n1 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.057 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.058 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.316 nvme0n1 00:20:47.316 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 
16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.317 16:15:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.317 16:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.576 nvme0n1 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:47.576 16:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.576 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.577 nvme0n1 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:47.577 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 16:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 nvme0n1 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.836 
16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.836 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.837 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
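The xtrace output above is dense, so here is a condensed shell sketch of what nvmf/common.sh (configure_kernel_target) and host/auth.sh (nvmet_auth_set_key / connect_authenticate) are doing up to this point: stand up a kernel NVMe-oF/TCP target through configfs, register the host NQN with DH-HMAC-CHAP secrets, then have SPDK authenticate against it and tear the controller down again. The configfs attribute names (attr_*, device_path, enable, dhchap_*) are assumptions inferred from the standard Linux nvmet layout, since the trace records only the values being echoed, not their destination files; rpc_cmd stands for the test harness wrapper around SPDK's scripts/rpc.py, and the DHHC-1 secrets are abbreviated into variables rather than repeated verbatim.

    # Paths as set in nvmf/common.sh@662-665
    nvmet=/sys/kernel/config/nvmet
    kernel_subsystem=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    kernel_namespace=$kernel_subsystem/namespaces/1
    kernel_port=$nvmet/ports/1
    kernel_host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    # Kernel target: subsystem, namespace backed by the first unused NVMe block device, TCP port 4420
    modprobe nvmet
    mkdir "$kernel_subsystem" "$kernel_namespace" "$kernel_port"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$kernel_subsystem/attr_model"   # attribute name assumed
    echo /dev/nvme1n1 > "$kernel_namespace/device_path"                     # attribute name assumed
    echo 1            > "$kernel_namespace/enable"                          # attribute name assumed
    echo 10.0.0.1 > "$kernel_port/addr_traddr"                              # attribute names assumed
    echo tcp      > "$kernel_port/addr_trtype"
    echo 4420     > "$kernel_port/addr_trsvcid"
    echo ipv4     > "$kernel_port/addr_adrfam"
    ln -s "$kernel_subsystem" "$kernel_port/subsystems/"

    # Host entry with DH-HMAC-CHAP material (host/auth.sh@35-51): restrict the subsystem to this
    # host and load the hash, DH group, host key and controller key for the keyid under test.
    mkdir "$kernel_host"
    echo 0 > "$kernel_subsystem/attr_allow_any_host"                        # attribute name assumed
    ln -s "$kernel_host" "$kernel_subsystem/allowed_hosts/"
    key='DHHC-1:...'    # host key for this keyid, as echoed in the trace (abbreviated here)
    ckey='DHHC-1:...'   # controller key, when the keyid has one (abbreviated here)
    echo 'hmac(sha256)' > "$kernel_host/dhchap_hash"                        # attribute names assumed
    echo ffdhe2048      > "$kernel_host/dhchap_dhgroup"
    echo "$key"         > "$kernel_host/dhchap_key"
    echo "$ckey"        > "$kernel_host/dhchap_ctrl_key"

    # SPDK initiator side: enable the digest/dhgroup, attach with the matching keypair,
    # confirm the controller came up authenticated, then detach before the next combination.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected to print nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0

The repeated nvmet_auth_set_key / connect_authenticate blocks that follow are this same last stage run in a loop over keyids 0-4 and over the remaining DH groups (ffdhe3072 and larger), which is why the trace keeps cycling through set_options, attach, get_controllers, and detach.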
00:20:48.095 nvme0n1 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.095 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.354 16:15:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.354 16:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 nvme0n1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.614 16:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.614 16:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 nvme0n1 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.614 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:48.873 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 nvme0n1 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:48.874 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.133 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.134 nvme0n1 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.134 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.393 nvme0n1 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.393 16:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.959 16:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.959 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.960 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 nvme0n1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.218 16:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.218 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.476 nvme0n1 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.476 16:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.476 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.734 nvme0n1 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:50.734 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.735 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.994 nvme0n1 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.994 16:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.994 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 nvme0n1 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.253 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.254 16:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:52.647 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:52.648 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:52.920 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.920 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.920 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.179 nvme0n1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.179 16:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.438 nvme0n1 00:20:53.438 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.438 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.438 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.439 16:16:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.439 16:16:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.439 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.007 nvme0n1 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.007 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.008 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.008 
16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.267 nvme0n1 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.267 16:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 nvme0n1 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.835 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.836 16:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.836 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.404 nvme0n1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.404 16:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.971 nvme0n1 00:20:55.971 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.972 
16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.972 16:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.540 nvme0n1 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.540 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 nvme0n1 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 16:16:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.108 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.109 16:16:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.109 16:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.676 nvme0n1 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:57.676 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.677 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:57.936 nvme0n1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.936 nvme0n1 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.936 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:58.196 
16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 nvme0n1 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.196 
16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.196 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.455 nvme0n1 00:20:58.455 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.455 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.455 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.455 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.455 16:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.455 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.456 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.714 nvme0n1 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.714 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.715 nvme0n1 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.715 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 
16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.974 16:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 nvme0n1 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.974 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:58.975 16:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.975 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 nvme0n1 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.234 16:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.234 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 nvme0n1 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 16:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.493 
16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
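
For reference, every authentication round captured in this part of the log reduces to the same four-step RPC sequence on the initiator side: restrict the allowed digest and DH group, attach the controller with the host key (and controller key, when one exists) for the current keyid, confirm the controller actually came up, then detach it before the next combination. The sketch below reconstructs one such round (sha384 / ffdhe3072 / keyid 1) from the rpc_cmd calls shown above; it assumes rpc_cmd forwards to SPDK's scripts/rpc.py, and the target-side secrets are assumed to have been provisioned already by nvmet_auth_set_key.

# Minimal sketch of one connect_authenticate round; the address, NQNs, key names and
# flags are taken verbatim from the log above, the rpc.py path is an assumption.
rpc=./scripts/rpc.py

# 1. Allow only the digest/dhgroup under test on the initiator.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Attach the controller, authenticating with key1/ckey1 (keyid 1).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. The round passes if the controller is actually visible afterwards.
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# 4. Tear down before the next (dhgroup, keyid) combination.
$rpc bdev_nvme_detach_controller nvme0

The surrounding loops in host/auth.sh repeat this round for each DH group (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...) and each keyid 0-4, which is why the same pattern recurs throughout this portion of the output; keyids with no controller key (keyid 4 here) simply omit --dhchap-ctrlr-key.
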
00:20:59.493 nvme0n1 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.493 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.752 16:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 nvme0n1 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.752 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.011 16:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.011 16:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.011 nvme0n1 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.011 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.270 nvme0n1 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.270 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.529 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.529 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:00.529 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.529 16:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.529 nvme0n1 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.529 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.788 nvme0n1 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.788 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.046 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.047 16:16:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.047 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.309 nvme0n1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.309 16:16:07 
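Each connect_authenticate pass that follows the key programming is the same four-RPC sequence; the commands below are lifted from this trace (keyid=1 shown), assuming the harness environment where rpc_cmd wraps the SPDK JSON-RPC client and the target is already listening on 10.0.0.1:4420.

```bash
# One connect_authenticate iteration as it appears in the trace: restrict the
# host to this digest/dhgroup, attach with the keyid-1 secrets, confirm the
# controller exists, then tear it down before the next combination.
digest=sha384 dhgroup=ffdhe6144 keyid=1

rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

[[ "$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0
```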
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.309 16:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.580 nvme0n1 00:21:01.580 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.580 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.580 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.580 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.580 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:01.581 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.857 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.131 nvme0n1 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.131 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.132 16:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.391 nvme0n1 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.391 16:16:09 
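The keyid=4 pass here has no controller key (ckey is empty), which is why the later attach for key4 carries no --dhchap-ctrlr-key argument: the ckey array is built with a ${ckeys[keyid]:+...} expansion that collapses to nothing for an empty entry. A small, self-contained illustration with stand-in values:

```bash
# ${var:+word} expands to word only if var is set and non-empty, so the whole
# "--dhchap-ctrlr-key ckeyN" pair vanishes when no controller key is defined.
declare -a ckeys=([0]='DHHC-1:03:example-ctrlr-key:' [4]='')   # stand-in values

keyid=0; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}"   # 2  -> attach gets --dhchap-ctrlr-key ckey0

keyid=4; args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#args[@]}"   # 0  -> attach runs with the host key only
```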
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.391 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.957 nvme0n1 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.957 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 nvme0n1 00:21:03.527 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.527 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 16:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.527 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.096 nvme0n1 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.096 16:16:10 
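get_main_ns_ip, traced just above, resolves which address the host should dial for the chosen transport: it maps the transport to the environment variable holding that address and prints its value (NVMF_INITIATOR_IP, 10.0.0.1, for tcp). Below is a sketch of that logic as it reads from the trace; the name of the transport variable is an assumption, since xtrace only shows its value, tcp.

```bash
# Reconstruction of get_main_ns_ip from the trace (not the verbatim source):
# pick the variable name for the transport, then print its value indirectly.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z "$TEST_TRANSPORT" ]] && return 1     # 'tcp' in this run; variable name assumed
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z "$ip" ]] && return 1
    [[ -z "${!ip}" ]] && return 1              # e.g. NVMF_INITIATOR_IP=10.0.0.1
    echo "${!ip}"
}
```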
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.096 16:16:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.096 16:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.664 nvme0n1 00:21:04.664 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.664 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.664 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.665 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.665 
16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.233 nvme0n1 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.233 16:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.801 nvme0n1 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.801 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:05.802 16:16:12 
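From this point the trace switches to sha512 with ffdhe2048 and starts over at keyid 0, which makes the overall shape of the phase visible: nested loops over digests, DH groups, and key indices, each iteration doing the target-side key programming followed by a host-side connect. The sketch below captures that structure, listing only the values that appear in this excerpt and assuming host/auth.sh has been sourced so the keys/ckeys arrays and helper functions exist.

```bash
# Control flow of the auth sweep as shown by host/auth.sh@100-104 in the trace.
digests=(sha384 sha512)                    # only the digests visible in this excerpt
dhgroups=(ffdhe2048 ffdhe6144 ffdhe8192)   # likewise; the real lists may be longer

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                 # keys 0..4 in this run
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
```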
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:05.802 16:16:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.802 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 nvme0n1 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:06.061 16:16:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.061 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 nvme0n1 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.062 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.321 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 nvme0n1 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.322 16:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 nvme0n1 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 nvme0n1 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.582 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.841 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.841 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.841 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.841 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.842 nvme0n1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.842 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.103 nvme0n1 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:07.103 
16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.103 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 nvme0n1 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.363 
16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.363 16:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 nvme0n1 00:21:07.363 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.363 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.363 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.363 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.363 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.622 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.623 nvme0n1 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.623 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.882 nvme0n1 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.882 
16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.882 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.142 16:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.142 nvme0n1 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.142 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.401 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.401 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.401 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:08.401 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:08.402 16:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.402 16:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.402 nvme0n1 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.402 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.661 16:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.661 nvme0n1 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.661 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.662 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.662 
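
The connect_authenticate calls traced above come down to two host-side RPCs: restrict the allowed DH-HMAC-CHAP digest and DH group, then attach with the per-key secrets. rpc_cmd is the test-harness wrapper; outside the harness the same calls would presumably go through scripts/rpc.py (assumed invocation). The sketch mirrors the key3/ckey3 attach just above.

# Sketch of the RPC sequence behind connect_authenticate sha512 ffdhe4096 3, as traced above;
# scripts/rpc.py is assumed to be the out-of-harness equivalent of the rpc_cmd wrapper.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3
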
16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
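
Note that the keyid=4 attach above carries only --dhchap-key key4 and no --dhchap-ctrlr-key: its controller key is empty (ckey= at auth.sh@46), and the array expansion at host/auth.sh@58 drops the flag entirely in that case. A small stand-alone illustration of that expansion (the values below are demo values, not the test's secrets):

# Why an empty ckey means no --dhchap-ctrlr-key flag (per the @58 expansion in the trace).
ckeys=( "ctrl-secret-0" "" )        # demo values only; index 1 stands in for a key with no controller secret
keyid=1
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"                  # -> 0: the attach command gets no extra arguments
keyid=0
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"                   # -> --dhchap-ctrlr-key ckey0
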
00:21:08.921 nvme0n1 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:08.921 16:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.921 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.198 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.462 nvme0n1 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.462 16:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.462 16:16:16 
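
The host/auth.sh@64-65 lines just above (repeated after every attach in this sweep) are the per-iteration check: list the attached controllers, confirm the only one is nvme0, then detach it before the next key is tried. Condensed into a sketch:

# Per-iteration verification and teardown, as traced at host/auth.sh@64-65.
name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == "nvme0" ]]              # exactly one controller came up, named nvme0
rpc_cmd bdev_nvme_detach_controller nvme0
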
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.462 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.463 16:16:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.463 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.722 nvme0n1 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.722 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 nvme0n1 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.290 16:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.549 nvme0n1 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.549 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.808 nvme0n1 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.808 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZmJlMDM4MDQyODFmNzk5NWRjYmM0YThmZWFmNjBkZGXi0a9t: 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: ]] 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmNjNWI2ODk1MWMyMWIyMDQ2MTk5NjgwM2ExZWM4NWRkMzNiZGFlNzY5MGQ3M2I1YzE2NGUwMzQxNDI1YjY5ZmwVvB8=: 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.067 16:16:17 
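
At this point the trace has moved on to the ffdhe8192 group (host/auth.sh@101 opens a new dhgroup iteration just above). The overall shape of the sweep, reconstructed from the @101-104 loop headers visible in this window (the full dhgroups and keys arrays are defined earlier in auth.sh and only partly visible here):

# Shape of the sweep driving this trace (loop bodies condensed; array contents assumed from
# what is visible here: ffdhe4096/ffdhe6144/ffdhe8192, key ids 0-4, digest sha512).
for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103: program the target-side secrets
        connect_authenticate sha512 "$dhgroup" "$keyid"  # @104: set options, attach, verify, detach
    done
done
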
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.067 16:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.635 nvme0n1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.635 16:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.635 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.203 nvme0n1 00:21:12.203 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.204 16:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.771 nvme0n1 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjUyNmNjYTIwNGQyYjZiYTNmOWQ3MzlhNTkyYTQzNzNkMmZiMzAxYTNmMjYyZGZmiwDYXg==: 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YWJjM2FhMmViZGVlODIyZWE0ZWE2N2I3M2EzZDU3OTl4gheT: 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.771 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.339 nvme0n1 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ODc3NTkyYTA3YTliMmJlODY4YjkzZWM3Y2U3YzE2YjE1MTQ0Y2NiNGVhYzcxZGJkYWU3NjUyZTVjMjg4NDFjOcGK/UU=: 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.339 16:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.339 16:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.908 nvme0n1 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.908 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 request: 00:21:13.909 { 00:21:13.909 "name": "nvme0", 00:21:13.909 "trtype": "tcp", 00:21:13.909 "traddr": "10.0.0.1", 00:21:13.909 "adrfam": "ipv4", 00:21:13.909 "trsvcid": "4420", 00:21:13.909 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:13.909 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:13.909 "prchk_reftag": false, 00:21:13.909 "prchk_guard": false, 00:21:13.909 "hdgst": false, 00:21:13.909 "ddgst": false, 00:21:13.909 "allow_unrecognized_csi": false, 00:21:13.909 "method": "bdev_nvme_attach_controller", 00:21:13.909 "req_id": 1 00:21:13.909 } 00:21:13.909 Got JSON-RPC error response 00:21:13.909 response: 00:21:13.909 { 00:21:13.909 "code": -5, 00:21:13.909 "message": "Input/output error" 00:21:13.909 } 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 request: 00:21:13.909 { 00:21:13.909 "name": "nvme0", 00:21:13.909 "trtype": "tcp", 00:21:13.909 "traddr": "10.0.0.1", 00:21:13.909 "adrfam": "ipv4", 00:21:13.909 "trsvcid": "4420", 00:21:13.909 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:13.909 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:13.909 "prchk_reftag": false, 00:21:13.909 "prchk_guard": false, 00:21:13.909 "hdgst": false, 00:21:13.909 "ddgst": false, 00:21:13.909 "dhchap_key": "key2", 00:21:13.909 "allow_unrecognized_csi": false, 00:21:13.909 "method": "bdev_nvme_attach_controller", 00:21:13.909 "req_id": 1 00:21:13.909 } 00:21:13.909 Got JSON-RPC error response 00:21:13.909 response: 00:21:13.909 { 00:21:13.909 "code": -5, 00:21:13.909 "message": "Input/output error" 00:21:13.909 } 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:13.909 16:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.909 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.169 request: 00:21:14.169 { 00:21:14.169 "name": "nvme0", 00:21:14.169 "trtype": "tcp", 00:21:14.169 "traddr": "10.0.0.1", 00:21:14.169 "adrfam": "ipv4", 00:21:14.169 "trsvcid": "4420", 
00:21:14.169 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:14.169 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:14.169 "prchk_reftag": false, 00:21:14.169 "prchk_guard": false, 00:21:14.169 "hdgst": false, 00:21:14.169 "ddgst": false, 00:21:14.169 "dhchap_key": "key1", 00:21:14.169 "dhchap_ctrlr_key": "ckey2", 00:21:14.169 "allow_unrecognized_csi": false, 00:21:14.169 "method": "bdev_nvme_attach_controller", 00:21:14.169 "req_id": 1 00:21:14.169 } 00:21:14.169 Got JSON-RPC error response 00:21:14.169 response: 00:21:14.169 { 00:21:14.169 "code": -5, 00:21:14.169 "message": "Input/output error" 00:21:14.169 } 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.169 nvme0n1 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.169 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.170 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.429 request: 00:21:14.429 { 00:21:14.429 "name": "nvme0", 00:21:14.429 "dhchap_key": "key1", 00:21:14.429 "dhchap_ctrlr_key": "ckey2", 00:21:14.429 "method": "bdev_nvme_set_keys", 00:21:14.429 "req_id": 1 00:21:14.429 } 00:21:14.429 Got JSON-RPC error response 00:21:14.429 response: 00:21:14.429 
{ 00:21:14.429 "code": -13, 00:21:14.429 "message": "Permission denied" 00:21:14.429 } 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:14.429 16:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:15.366 16:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.366 16:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:15.366 16:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.366 16:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.366 16:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzAzYTBmOTBmZmExZTMxMzhlNGNmMzNiMjdlYjgxNmNhYWQzNGFjYTFiZjNkZDc0actojw==: 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: ]] 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZjQ0MjNjYmMzNGFiMDZhMGI4MmRlNjMyNWYyNTA5Y2M3YTFjODkwYzAwYzU4NjhhnK54ZA==: 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.366 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.625 nvme0n1 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.625 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZTExNmZiMzBlZDAxODEyNmI4Nzg3OTY0OGY3M2EzNjRkHxOA: 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: ]] 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZWNjOTZlZTA1OTI3YTU1YjljYjEwZmRmMmMzNDQwMzZU5ozY: 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 request: 00:21:15.626 { 00:21:15.626 "name": "nvme0", 00:21:15.626 "dhchap_key": "key2", 00:21:15.626 "dhchap_ctrlr_key": "ckey1", 00:21:15.626 "method": "bdev_nvme_set_keys", 00:21:15.626 "req_id": 1 00:21:15.626 } 00:21:15.626 Got JSON-RPC error response 00:21:15.626 response: 00:21:15.626 { 00:21:15.626 "code": -13, 00:21:15.626 "message": "Permission denied" 00:21:15.626 } 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:15.626 16:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:21:16.561 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.820 rmmod nvme_tcp 00:21:16.820 rmmod nvme_fabrics 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 93913 ']' 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 93913 ']' 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.820 killing process with pid 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93913' 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 93913 00:21:16.820 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:16.821 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:16.821 16:16:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:21:17.080 16:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.017 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.017 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:18.017 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.017 16:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Gvm /tmp/spdk.key-null.EjR /tmp/spdk.key-sha256.78h /tmp/spdk.key-sha384.oTb /tmp/spdk.key-sha512.qeC /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:18.017 16:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.585 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.585 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:18.585 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:18.585 00:21:18.585 real 0m35.086s 00:21:18.585 user 0m32.619s 00:21:18.585 sys 0m3.752s 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.585 ************************************ 00:21:18.585 END TEST nvmf_auth_host 00:21:18.585 ************************************ 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.585 ************************************ 00:21:18.585 START TEST nvmf_digest 00:21:18.585 ************************************ 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:18.585 * Looking for test storage... 
00:21:18.585 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.585 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.846 --rc genhtml_branch_coverage=1 00:21:18.846 --rc genhtml_function_coverage=1 00:21:18.846 --rc genhtml_legend=1 00:21:18.846 --rc geninfo_all_blocks=1 00:21:18.846 --rc geninfo_unexecuted_blocks=1 00:21:18.846 00:21:18.846 ' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.846 --rc genhtml_branch_coverage=1 00:21:18.846 --rc genhtml_function_coverage=1 00:21:18.846 --rc genhtml_legend=1 00:21:18.846 --rc geninfo_all_blocks=1 00:21:18.846 --rc geninfo_unexecuted_blocks=1 00:21:18.846 00:21:18.846 ' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.846 --rc genhtml_branch_coverage=1 00:21:18.846 --rc genhtml_function_coverage=1 00:21:18.846 --rc genhtml_legend=1 00:21:18.846 --rc geninfo_all_blocks=1 00:21:18.846 --rc geninfo_unexecuted_blocks=1 00:21:18.846 00:21:18.846 ' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.846 --rc genhtml_branch_coverage=1 00:21:18.846 --rc genhtml_function_coverage=1 00:21:18.846 --rc genhtml_legend=1 00:21:18.846 --rc geninfo_all_blocks=1 00:21:18.846 --rc geninfo_unexecuted_blocks=1 00:21:18.846 00:21:18.846 ' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.846 16:16:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.846 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:18.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:18.847 Cannot find device "nvmf_init_br" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:18.847 Cannot find device "nvmf_init_br2" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:18.847 Cannot find device "nvmf_tgt_br" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:18.847 Cannot find device "nvmf_tgt_br2" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:18.847 Cannot find device "nvmf_init_br" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:18.847 Cannot find device "nvmf_init_br2" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:18.847 Cannot find device "nvmf_tgt_br" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:18.847 Cannot find device "nvmf_tgt_br2" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:18.847 Cannot find device "nvmf_br" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:18.847 Cannot find device "nvmf_init_if" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:18.847 Cannot find device "nvmf_init_if2" 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:18.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:18.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:18.847 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.107 16:16:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:19.107 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:19.107 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:21:19.107 00:21:19.107 --- 10.0.0.3 ping statistics --- 00:21:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.107 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:19.107 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:19.107 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:21:19.107 00:21:19.107 --- 10.0.0.4 ping statistics --- 00:21:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.107 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:21:19.107 00:21:19.107 --- 10.0.0.1 ping statistics --- 00:21:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.107 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:19.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:21:19.107 00:21:19.107 --- 10.0.0.2 ping statistics --- 00:21:19.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.107 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.107 ************************************ 00:21:19.107 START TEST nvmf_digest_clean 00:21:19.107 ************************************ 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=95533 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 95533 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95533 ']' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.107 16:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.367 [2024-11-19 16:16:25.830423] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:19.367 [2024-11-19 16:16:25.830528] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.367 [2024-11-19 16:16:25.986197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.367 [2024-11-19 16:16:26.010109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.367 [2024-11-19 16:16:26.010164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.367 [2024-11-19 16:16:26.010179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.367 [2024-11-19 16:16:26.010190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.367 [2024-11-19 16:16:26.010199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.367 [2024-11-19 16:16:26.010582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.367 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.367 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:19.367 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.367 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.367 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.626 [2024-11-19 16:16:26.143078] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.626 null0 00:21:19.626 [2024-11-19 16:16:26.177448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.626 [2024-11-19 16:16:26.201594] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95558 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95558 /var/tmp/bperf.sock 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95558 ']' 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:19.626 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:19.627 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:19.627 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.627 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.627 [2024-11-19 16:16:26.256261] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:19.627 [2024-11-19 16:16:26.256351] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95558 ] 00:21:19.886 [2024-11-19 16:16:26.405664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.886 [2024-11-19 16:16:26.430006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.886 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.886 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:19.886 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:19.886 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:19.886 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:20.145 [2024-11-19 16:16:26.794602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.145 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.145 16:16:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.404 nvme0n1 00:21:20.404 16:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:20.404 16:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:20.663 Running I/O for 2 seconds... 
00:21:22.978 17653.00 IOPS, 68.96 MiB/s [2024-11-19T16:16:29.693Z] 17653.00 IOPS, 68.96 MiB/s 00:21:22.978 Latency(us) 00:21:22.978 [2024-11-19T16:16:29.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.978 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:22.978 nvme0n1 : 2.01 17686.00 69.09 0.00 0.00 7232.92 6702.55 17873.45 00:21:22.978 [2024-11-19T16:16:29.693Z] =================================================================================================================== 00:21:22.978 [2024-11-19T16:16:29.693Z] Total : 17686.00 69.09 0.00 0.00 7232.92 6702.55 17873.45 00:21:22.978 { 00:21:22.978 "results": [ 00:21:22.978 { 00:21:22.978 "job": "nvme0n1", 00:21:22.978 "core_mask": "0x2", 00:21:22.978 "workload": "randread", 00:21:22.978 "status": "finished", 00:21:22.978 "queue_depth": 128, 00:21:22.978 "io_size": 4096, 00:21:22.978 "runtime": 2.010686, 00:21:22.978 "iops": 17686.003682325336, 00:21:22.978 "mibps": 69.08595188408334, 00:21:22.978 "io_failed": 0, 00:21:22.978 "io_timeout": 0, 00:21:22.978 "avg_latency_us": 7232.921701046345, 00:21:22.978 "min_latency_us": 6702.545454545455, 00:21:22.978 "max_latency_us": 17873.454545454544 00:21:22.978 } 00:21:22.978 ], 00:21:22.978 "core_count": 1 00:21:22.978 } 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:22.978 | select(.opcode=="crc32c") 00:21:22.978 | "\(.module_name) \(.executed)"' 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95558 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95558 ']' 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95558 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95558 00:21:22.978 killing process with pid 95558 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95558' 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95558 00:21:22.978 Received shutdown signal, test time was about 2.000000 seconds 00:21:22.978 00:21:22.978 Latency(us) 00:21:22.978 [2024-11-19T16:16:29.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.978 [2024-11-19T16:16:29.693Z] =================================================================================================================== 00:21:22.978 [2024-11-19T16:16:29.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:22.978 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95558 00:21:23.237 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95608 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95608 /var/tmp/bperf.sock 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95608 ']' 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.238 16:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.238 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:23.238 Zero copy mechanism will not be used. 00:21:23.238 [2024-11-19 16:16:29.791835] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:21:23.238 [2024-11-19 16:16:29.791928] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95608 ] 00:21:23.238 [2024-11-19 16:16:29.939404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.496 [2024-11-19 16:16:29.959782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.496 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.496 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:23.496 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:23.496 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:23.496 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:23.755 [2024-11-19 16:16:30.324479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:23.755 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:23.755 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.014 nvme0n1 00:21:24.014 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.014 16:16:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.273 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.273 Zero copy mechanism will not be used. 00:21:24.273 Running I/O for 2 seconds... 
00:21:26.145 8432.00 IOPS, 1054.00 MiB/s [2024-11-19T16:16:32.860Z] 8488.00 IOPS, 1061.00 MiB/s 00:21:26.145 Latency(us) 00:21:26.145 [2024-11-19T16:16:32.860Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.145 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:26.145 nvme0n1 : 2.00 8486.60 1060.82 0.00 0.00 1882.54 1668.19 7685.59 00:21:26.145 [2024-11-19T16:16:32.860Z] =================================================================================================================== 00:21:26.145 [2024-11-19T16:16:32.860Z] Total : 8486.60 1060.82 0.00 0.00 1882.54 1668.19 7685.59 00:21:26.145 { 00:21:26.145 "results": [ 00:21:26.145 { 00:21:26.145 "job": "nvme0n1", 00:21:26.145 "core_mask": "0x2", 00:21:26.145 "workload": "randread", 00:21:26.145 "status": "finished", 00:21:26.145 "queue_depth": 16, 00:21:26.145 "io_size": 131072, 00:21:26.145 "runtime": 2.002216, 00:21:26.145 "iops": 8486.596850689437, 00:21:26.145 "mibps": 1060.8246063361796, 00:21:26.145 "io_failed": 0, 00:21:26.145 "io_timeout": 0, 00:21:26.145 "avg_latency_us": 1882.5363876048623, 00:21:26.145 "min_latency_us": 1668.189090909091, 00:21:26.145 "max_latency_us": 7685.585454545455 00:21:26.145 } 00:21:26.145 ], 00:21:26.145 "core_count": 1 00:21:26.145 } 00:21:26.145 16:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.145 16:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.145 16:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.145 16:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.145 16:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.145 | select(.opcode=="crc32c") 00:21:26.145 | "\(.module_name) \(.executed)"' 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95608 00:21:26.403 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95608 ']' 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95608 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95608 00:21:26.404 killing process with pid 95608 00:21:26.404 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.404 00:21:26.404 Latency(us) 00:21:26.404 [2024-11-19T16:16:33.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:26.404 [2024-11-19T16:16:33.119Z] =================================================================================================================== 00:21:26.404 [2024-11-19T16:16:33.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95608' 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95608 00:21:26.404 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95608 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95661 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95661 /var/tmp/bperf.sock 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95661 ']' 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:26.662 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.662 [2024-11-19 16:16:33.245371] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:21:26.662 [2024-11-19 16:16:33.245454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95661 ] 00:21:26.921 [2024-11-19 16:16:33.385690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.921 [2024-11-19 16:16:33.405358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:26.921 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.921 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:26.921 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:26.921 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:26.921 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.180 [2024-11-19 16:16:33.724850] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.180 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.181 16:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.439 nvme0n1 00:21:27.439 16:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:27.439 16:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:27.698 Running I/O for 2 seconds... 
00:21:29.591 19051.00 IOPS, 74.42 MiB/s [2024-11-19T16:16:36.306Z] 19114.00 IOPS, 74.66 MiB/s 00:21:29.591 Latency(us) 00:21:29.591 [2024-11-19T16:16:36.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.591 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.591 nvme0n1 : 2.01 19125.07 74.71 0.00 0.00 6687.39 3098.07 15073.28 00:21:29.591 [2024-11-19T16:16:36.306Z] =================================================================================================================== 00:21:29.591 [2024-11-19T16:16:36.306Z] Total : 19125.07 74.71 0.00 0.00 6687.39 3098.07 15073.28 00:21:29.591 { 00:21:29.591 "results": [ 00:21:29.591 { 00:21:29.591 "job": "nvme0n1", 00:21:29.591 "core_mask": "0x2", 00:21:29.591 "workload": "randwrite", 00:21:29.591 "status": "finished", 00:21:29.591 "queue_depth": 128, 00:21:29.591 "io_size": 4096, 00:21:29.591 "runtime": 2.005535, 00:21:29.591 "iops": 19125.07136499737, 00:21:29.591 "mibps": 74.70731001952097, 00:21:29.591 "io_failed": 0, 00:21:29.591 "io_timeout": 0, 00:21:29.591 "avg_latency_us": 6687.38587813688, 00:21:29.591 "min_latency_us": 3098.0654545454545, 00:21:29.591 "max_latency_us": 15073.28 00:21:29.591 } 00:21:29.591 ], 00:21:29.591 "core_count": 1 00:21:29.591 } 00:21:29.591 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:29.591 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:29.591 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:29.591 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:29.591 | select(.opcode=="crc32c") 00:21:29.591 | "\(.module_name) \(.executed)"' 00:21:29.591 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95661 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95661 ']' 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95661 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95661 00:21:29.876 killing process with pid 95661 00:21:29.876 Received shutdown signal, test time was about 2.000000 seconds 00:21:29.876 00:21:29.876 Latency(us) 00:21:29.876 [2024-11-19T16:16:36.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.876 
[2024-11-19T16:16:36.591Z] =================================================================================================================== 00:21:29.876 [2024-11-19T16:16:36.591Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95661' 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95661 00:21:29.876 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95661 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95709 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95709 /var/tmp/bperf.sock 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:30.144 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 95709 ']' 00:21:30.145 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.145 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.145 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.145 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.145 16:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:30.145 [2024-11-19 16:16:36.701198] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:21:30.145 [2024-11-19 16:16:36.701515] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95709 ] 00:21:30.145 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:30.145 Zero copy mechanism will not be used. 00:21:30.145 [2024-11-19 16:16:36.847374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.415 [2024-11-19 16:16:36.870842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.984 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.984 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:21:30.984 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:30.984 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:30.984 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:31.243 [2024-11-19 16:16:37.890477] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:31.243 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.243 16:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.503 nvme0n1 00:21:31.762 16:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:31.762 16:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:31.762 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:31.762 Zero copy mechanism will not be used. 00:21:31.762 Running I/O for 2 seconds... 
00:21:34.075 7132.00 IOPS, 891.50 MiB/s [2024-11-19T16:16:40.790Z] 7291.00 IOPS, 911.38 MiB/s 00:21:34.076 Latency(us) 00:21:34.076 [2024-11-19T16:16:40.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.076 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:34.076 nvme0n1 : 2.00 7286.18 910.77 0.00 0.00 2190.91 1571.37 9889.98 00:21:34.076 [2024-11-19T16:16:40.791Z] =================================================================================================================== 00:21:34.076 [2024-11-19T16:16:40.791Z] Total : 7286.18 910.77 0.00 0.00 2190.91 1571.37 9889.98 00:21:34.076 { 00:21:34.076 "results": [ 00:21:34.076 { 00:21:34.076 "job": "nvme0n1", 00:21:34.076 "core_mask": "0x2", 00:21:34.076 "workload": "randwrite", 00:21:34.076 "status": "finished", 00:21:34.076 "queue_depth": 16, 00:21:34.076 "io_size": 131072, 00:21:34.076 "runtime": 2.004342, 00:21:34.076 "iops": 7286.18169953032, 00:21:34.076 "mibps": 910.77271244129, 00:21:34.076 "io_failed": 0, 00:21:34.076 "io_timeout": 0, 00:21:34.076 "avg_latency_us": 2190.905891785563, 00:21:34.076 "min_latency_us": 1571.3745454545453, 00:21:34.076 "max_latency_us": 9889.978181818182 00:21:34.076 } 00:21:34.076 ], 00:21:34.076 "core_count": 1 00:21:34.076 } 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:34.076 | select(.opcode=="crc32c") 00:21:34.076 | "\(.module_name) \(.executed)"' 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95709 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95709 ']' 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95709 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95709 00:21:34.076 killing process with pid 95709 00:21:34.076 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.076 00:21:34.076 Latency(us) 00:21:34.076 [2024-11-19T16:16:40.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:21:34.076 [2024-11-19T16:16:40.791Z] =================================================================================================================== 00:21:34.076 [2024-11-19T16:16:40.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95709' 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95709 00:21:34.076 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95709 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95533 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 95533 ']' 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 95533 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95533 00:21:34.336 killing process with pid 95533 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95533' 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 95533 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 95533 00:21:34.336 00:21:34.336 real 0m15.189s 00:21:34.336 user 0m29.528s 00:21:34.336 sys 0m4.480s 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:34.336 ************************************ 00:21:34.336 END TEST nvmf_digest_clean 00:21:34.336 ************************************ 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:34.336 ************************************ 00:21:34.336 START TEST nvmf_digest_error 00:21:34.336 ************************************ 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:21:34.336 16:16:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.336 16:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=95791 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 95791 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95791 ']' 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.336 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.595 [2024-11-19 16:16:41.055926] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:34.595 [2024-11-19 16:16:41.056014] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.595 [2024-11-19 16:16:41.193087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.595 [2024-11-19 16:16:41.210746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.595 [2024-11-19 16:16:41.211007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.595 [2024-11-19 16:16:41.211155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.595 [2024-11-19 16:16:41.211206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.595 [2024-11-19 16:16:41.211358] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.595 [2024-11-19 16:16:41.211708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.595 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.595 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:34.595 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.595 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.596 [2024-11-19 16:16:41.284157] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.596 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.855 [2024-11-19 16:16:41.324357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:34.855 null0 00:21:34.855 [2024-11-19 16:16:41.355297] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.855 [2024-11-19 16:16:41.379399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95817 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95817 /var/tmp/bperf.sock 00:21:34.855 16:16:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95817 ']' 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:34.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.855 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.855 [2024-11-19 16:16:41.446191] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:34.855 [2024-11-19 16:16:41.446507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95817 ] 00:21:35.114 [2024-11-19 16:16:41.595585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.114 [2024-11-19 16:16:41.615187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.114 [2024-11-19 16:16:41.644054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.115 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.115 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:35.115 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:35.115 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.374 16:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:35.633 nvme0n1 00:21:35.633 16:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:35.633 16:16:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.633 16:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.633 16:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.633 16:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:35.633 16:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:35.892 Running I/O for 2 seconds... 00:21:35.892 [2024-11-19 16:16:42.386187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.386250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.386294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.401329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.401376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.401389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.415444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.415479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.415508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.429538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.429575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.429588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.443613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.443648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.443676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.458280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.458311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1259 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.458323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.472463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.472498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.472527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.486392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.486429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.486442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.500453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.500487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.500515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.514504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.514720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.514738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.529011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.529046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:6547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.529090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.543262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.543323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.543338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.892 [2024-11-19 16:16:42.557279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.892 [2024-11-19 16:16:42.557313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:8019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.892 [2024-11-19 16:16:42.557342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.893 [2024-11-19 16:16:42.571301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.893 [2024-11-19 16:16:42.571343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.893 [2024-11-19 16:16:42.571355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.893 [2024-11-19 16:16:42.585332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.893 [2024-11-19 16:16:42.585366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.893 [2024-11-19 16:16:42.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:35.893 [2024-11-19 16:16:42.599412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:35.893 [2024-11-19 16:16:42.599447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:35.893 [2024-11-19 16:16:42.599460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.152 [2024-11-19 16:16:42.614776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.152 [2024-11-19 16:16:42.614965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-11-19 16:16:42.614999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.152 [2024-11-19 16:16:42.629174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.152 [2024-11-19 16:16:42.629225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:6168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-11-19 16:16:42.629283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.152 [2024-11-19 16:16:42.643343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.152 [2024-11-19 16:16:42.643376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-11-19 16:16:42.643406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.152 [2024-11-19 16:16:42.657359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.152 [2024-11-19 16:16:42.657394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:3262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.152 [2024-11-19 16:16:42.657407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.152 [2024-11-19 16:16:42.671372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.152 [2024-11-19 16:16:42.671406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.671434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.685352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.685386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.685398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.699323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.699355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.699383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.713335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.713373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.713385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.727425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.727459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.727487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.741455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.741490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.741503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.755476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 
[2024-11-19 16:16:42.755509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.755538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.769386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.769421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.769433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.783453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.783488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.783516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.797400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.797436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.797449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.811495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.811528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.811557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.825509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.825544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.825556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.839656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.839689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.839718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.153 [2024-11-19 16:16:42.853659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xcd6d60) 00:21:36.153 [2024-11-19 16:16:42.853692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.153 [2024-11-19 16:16:42.853720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.868903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.868970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.869000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.884026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.884061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.884090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.898698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.898893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:5264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.898911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.913320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.913521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.913540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.927689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.927725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.927753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.941839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.941873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.941902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.956165] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.956198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.956226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.970142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.970177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.970205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.984235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.984313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.984339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:42.998273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:42.998307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:42.998335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:43.012336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:43.012527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:10396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:43.012546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:43.026604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:43.026813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:43.026831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.419 [2024-11-19 16:16:43.040895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.419 [2024-11-19 16:16:43.040930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.419 [2024-11-19 16:16:43.040958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:21:36.419 [2024-11-19 16:16:43.055109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.055143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.055172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.420 [2024-11-19 16:16:43.069300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.069331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14032 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.069343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.420 [2024-11-19 16:16:43.083377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.083411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.083440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.420 [2024-11-19 16:16:43.097433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.097468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:14711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.097480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.420 [2024-11-19 16:16:43.111516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.111548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.111578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.420 [2024-11-19 16:16:43.125830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.420 [2024-11-19 16:16:43.125862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.420 [2024-11-19 16:16:43.125890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.141130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.141328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.141345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.155600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.155819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.155953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.170137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.170371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.170497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.185870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.186111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.186300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.204075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.204330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.204467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.220925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.221146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.221360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.236358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.236577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.236701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.251037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.251281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.251417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.265548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.265764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.265887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.280073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.280299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.280424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.300831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.301050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.315471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.315708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.315842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.331500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.331536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.331565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.348221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.348304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.348337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 17332.00 IOPS, 67.70 MiB/s [2024-11-19T16:16:43.397Z] [2024-11-19 16:16:43.364641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.364695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:4016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.364725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.682 [2024-11-19 16:16:43.380713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.682 [2024-11-19 16:16:43.380752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.682 [2024-11-19 16:16:43.380783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.396427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.396466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.396496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.411803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.411842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.411872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.427086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.427126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.427156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.442132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.442169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.442199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.457054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.457091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.457121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.472106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.472142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.472171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.487341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.487376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.487405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.502220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.502316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.502332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.516933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.516969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:23174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.516998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.531221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.531281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.531311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.545401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.545594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.559766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.559803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.559832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.573833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 
[2024-11-19 16:16:43.573867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.573895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.587989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.588023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.588052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.601970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.602004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.602033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.616195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.616229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.616286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.630450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.630485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.942 [2024-11-19 16:16:43.630497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:36.942 [2024-11-19 16:16:43.644774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:36.942 [2024-11-19 16:16:43.644808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:36.943 [2024-11-19 16:16:43.644837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-11-19 16:16:43.660399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.202 [2024-11-19 16:16:43.660433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-11-19 16:16:43.660461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-11-19 16:16:43.674419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xcd6d60) 00:21:37.202 [2024-11-19 16:16:43.674455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-11-19 16:16:43.674467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-11-19 16:16:43.688492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.202 [2024-11-19 16:16:43.688525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:11834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-11-19 16:16:43.688553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-11-19 16:16:43.702439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.202 [2024-11-19 16:16:43.702475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-11-19 16:16:43.702487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.202 [2024-11-19 16:16:43.716584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.202 [2024-11-19 16:16:43.716617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:21531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.202 [2024-11-19 16:16:43.716646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.730971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.731234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.731266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.745271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.745492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.745616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.759919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.760139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.760339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.774523] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.774741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.774882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.789060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.789285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.789411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.803773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.803978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.804117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.818412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.818628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.818759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.833309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.833524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.833647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.847902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.848121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.848255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.863177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.863212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.863240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
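Each repetition above is one injected digest failure: the host-side crc32c check in nvme_tcp.c reports a data digest error, the offending READ command is printed, and its completion carries the TRANSIENT TRANSPORT ERROR status (00/22). The test later reads the accumulated count of those completions back out of bdevperf over its RPC socket; the following is a minimal, unofficial shell sketch of that query, reconstructed from the bperf_rpc/jq trace further down in this log (socket path, bdev name, and jq filter copied from the trace):

    # Ask bdevperf (listening on /var/tmp/bperf.sock) for per-bdev I/O stats and
    # pull out the transient transport error counter for nvme0n1. The counter is
    # populated because NVMe error statistics were enabled earlier with
    # bdev_nvme_set_options --nvme-error-stat.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

digest.sh only asserts that this count is greater than zero; for the run above it comes back as 136, as seen in the (( 136 > 0 )) check below.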
00:21:37.203 [2024-11-19 16:16:43.877854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.877889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.877918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.892011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.892045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.892073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.203 [2024-11-19 16:16:43.906044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.203 [2024-11-19 16:16:43.906081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.203 [2024-11-19 16:16:43.906109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.921484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.463 [2024-11-19 16:16:43.921518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.463 [2024-11-19 16:16:43.921546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.937029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.463 [2024-11-19 16:16:43.937066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.463 [2024-11-19 16:16:43.937111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.953788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.463 [2024-11-19 16:16:43.953823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.463 [2024-11-19 16:16:43.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.967835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.463 [2024-11-19 16:16:43.967868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.463 [2024-11-19 16:16:43.967897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.982015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.463 [2024-11-19 16:16:43.982049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.463 [2024-11-19 16:16:43.982077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.463 [2024-11-19 16:16:43.996103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:43.996137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:43.996165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.010159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.010193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.010223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.024167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.024201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.024229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.038377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.038557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.038575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.052606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.052644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.052673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.066892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.066928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.066941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.080910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.080944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.080972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.094984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.095034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.095062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.109113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.109147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.109175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.123310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.123350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.123362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.137498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.137531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.137560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.151741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.151774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.151803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.464 [2024-11-19 16:16:44.165708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.464 [2024-11-19 16:16:44.165741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.464 [2024-11-19 16:16:44.165770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.181319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.181352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.181380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.195448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.195481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.195509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.210900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.211157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.211174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.228369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.228411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.228426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.251113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.251319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.251354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.266130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.266345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.266363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.280417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.280604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 
16:16:44.280621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.294658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.294869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.294887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.309228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.309290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.309321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.323601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.323635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.323663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.337841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.337875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.337904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 [2024-11-19 16:16:44.352345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.352379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.724 [2024-11-19 16:16:44.352408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.724 17268.00 IOPS, 67.45 MiB/s [2024-11-19T16:16:44.439Z] [2024-11-19 16:16:44.368850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcd6d60) 00:21:37.724 [2024-11-19 16:16:44.368886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.725 [2024-11-19 16:16:44.368915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.725 00:21:37.725 Latency(us) 00:21:37.725 [2024-11-19T16:16:44.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.725 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:37.725 nvme0n1 : 
2.01 17311.75 67.62 0.00 0.00 7388.40 6702.55 27644.28 00:21:37.725 [2024-11-19T16:16:44.440Z] =================================================================================================================== 00:21:37.725 [2024-11-19T16:16:44.440Z] Total : 17311.75 67.62 0.00 0.00 7388.40 6702.55 27644.28 00:21:37.725 { 00:21:37.725 "results": [ 00:21:37.725 { 00:21:37.725 "job": "nvme0n1", 00:21:37.725 "core_mask": "0x2", 00:21:37.725 "workload": "randread", 00:21:37.725 "status": "finished", 00:21:37.725 "queue_depth": 128, 00:21:37.725 "io_size": 4096, 00:21:37.725 "runtime": 2.009675, 00:21:37.725 "iops": 17311.754388147336, 00:21:37.725 "mibps": 67.62404057870053, 00:21:37.725 "io_failed": 0, 00:21:37.725 "io_timeout": 0, 00:21:37.725 "avg_latency_us": 7388.403075612554, 00:21:37.725 "min_latency_us": 6702.545454545455, 00:21:37.725 "max_latency_us": 27644.276363636363 00:21:37.725 } 00:21:37.725 ], 00:21:37.725 "core_count": 1 00:21:37.725 } 00:21:37.725 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:37.725 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:37.725 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:37.725 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:37.725 | .driver_specific 00:21:37.725 | .nvme_error 00:21:37.725 | .status_code 00:21:37.725 | .command_transient_transport_error' 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95817 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95817 ']' 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95817 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.984 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95817 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95817' 00:21:38.244 killing process with pid 95817 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95817 00:21:38.244 Received shutdown signal, test time was about 2.000000 seconds 00:21:38.244 00:21:38.244 Latency(us) 00:21:38.244 [2024-11-19T16:16:44.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.244 [2024-11-19T16:16:44.959Z] =================================================================================================================== 00:21:38.244 [2024-11-19T16:16:44.959Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95817 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95865 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95865 /var/tmp/bperf.sock 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95865 ']' 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:38.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.244 16:16:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.244 [2024-11-19 16:16:44.881753] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:38.244 [2024-11-19 16:16:44.882024] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-aI/O size of 131072 is greater than zero copy threshold (65536). 00:21:38.244 Zero copy mechanism will not be used. 
00:21:38.244 llocations --file-prefix=spdk_pid95865 ] 00:21:38.504 [2024-11-19 16:16:45.025448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.504 [2024-11-19 16:16:45.045059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.504 [2024-11-19 16:16:45.073954] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:38.504 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.504 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:38.504 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.504 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:38.763 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:39.022 nvme0n1 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:39.022 16:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:39.282 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:39.282 Zero copy mechanism will not be used. 00:21:39.282 Running I/O for 2 seconds... 
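With the 4 KiB run torn down, digest.sh repeats the experiment with 128 KiB random reads at queue depth 16. The setup traced above boils down to the sequence sketched below; this is an unofficial condensation of the traced commands, with every path, address, and flag copied from the trace, and rpc_cmd standing for the suite's generic RPC helper (it is invoked without -s, so it is not bound to the bperf socket):

    # Start bdevperf on core mask 0x2 with its own RPC socket: 128 KiB random
    # reads, queue depth 16, 2-second run, -z = wait for perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z &

    # Keep NVMe error statistics and retry failed I/O at the bdev layer
    # (-1 = unlimited retries), so transient errors are counted, not fatal.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the target over TCP with data digest enabled (--ddgst), keeping
    # crc32c error injection disabled while the connection is established...
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # ...then enable crc32c corruption (flags exactly as traced) so data digests
    # go bad on the wire, and kick off the timed run.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The digest errors and TRANSIENT TRANSPORT ERROR completions that follow are the expected result of that corruption; as with the previous run, the test will read the error counter back through bdev_get_iostat once the 2-second workload completes.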
00:21:39.282 [2024-11-19 16:16:45.774381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.774645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.774796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.779072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.779292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.779311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.783522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.783559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.783587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.787560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.787595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.787623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.791602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.791637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.791665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.795687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.795722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.795750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.799789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.799824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.799852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.282 [2024-11-19 16:16:45.804083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.282 [2024-11-19 16:16:45.804120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.282 [2024-11-19 16:16:45.804148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.808100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.808136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.808165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.812165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.812199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.812226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.816311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.816345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.816373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.820425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.820460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.820489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.824731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.824767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.824796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.829008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.829045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.829074] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.833395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.833430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.833458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.837402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.837436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.837464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.841596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.841647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.841691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.845697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.845732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.845761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.849684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.849719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.849747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.853725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.853761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.853790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.857699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.857734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.857762] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.861823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.861859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.861887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.865831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.865866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.865894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.869900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.869934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.869962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.873908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.873943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.873972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.878022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.878058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.878086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.882167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.882203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.882231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.886220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.886278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.283 [2024-11-19 16:16:45.886291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.890331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.890369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.890398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.894308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.894342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.894370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.898272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.898306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.898335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.902284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.902319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.902347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.906554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.906590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.906618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.910611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.910648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.283 [2024-11-19 16:16:45.910683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.283 [2024-11-19 16:16:45.914629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.283 [2024-11-19 16:16:45.914664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.914714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.918739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.918775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.918788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.922813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.922851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.922864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.926892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.926930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.926944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.930984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.931049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.931092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.935155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.935190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.935218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.939147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.939181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.939209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.943031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.943081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.943109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.946937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.946988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.947030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.950920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.950956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.950969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.954782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.954818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.954830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.958622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.958656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.958676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.962566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.962601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.962614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.966429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.966463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.966475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.970178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 
00:21:39.284 [2024-11-19 16:16:45.970211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.970239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.974014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.974048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.974075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.977996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.978029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.978057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.981957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.981990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.982018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.985824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.985858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.985885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.284 [2024-11-19 16:16:45.989874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.284 [2024-11-19 16:16:45.989910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.284 [2024-11-19 16:16:45.989938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:45.994393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:45.994428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:45.994456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:45.998391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:45.998424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:45.998452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.002515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.002549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.002576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.006602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.006677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.006707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.010645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.010701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.010730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.014581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.014614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.014643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.018549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.018581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.018609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.022388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.022421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.022448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.026211] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.026288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.026301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.030150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.030184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.030212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.034047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.034080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.034108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.037853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.037886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.037914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.041757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.041791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.041819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.045694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.045728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.045755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.545 [2024-11-19 16:16:46.049508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.049543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.545 [2024-11-19 16:16:46.049555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:21:39.545 [2024-11-19 16:16:46.053481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.545 [2024-11-19 16:16:46.053515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.053528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.057265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.057295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.057306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.061023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.061225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.061243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.065328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.065361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.065372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.069177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.069372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.069389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.073211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.073428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.073446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.077420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.077504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.077679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.081814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.082021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.082141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.086027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.086218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.086373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.090352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.090547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.090720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.094930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.095126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.095271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.099321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.099524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.099644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.103695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.103900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.104023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.108040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.108266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.108446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.112592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.112810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.112943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.116949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.117155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.117291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.121308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.121476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.121509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.125412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.125446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.125473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.129302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.129336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.129363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.133184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.133219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.133247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.137049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.137083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 
[2024-11-19 16:16:46.137111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.140949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.141011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.144937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.144971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.144998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.148888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.148921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.148949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.152924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.152957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.152985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.156990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.157024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.157052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.160989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.161023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.546 [2024-11-19 16:16:46.161050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.546 [2024-11-19 16:16:46.164894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.546 [2024-11-19 16:16:46.164927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.164955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.168775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.168807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.168835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.172686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.172719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.172747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.176672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.176705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.176733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.180617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.180651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.180663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.184538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.184571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.184599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.188505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.188538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.188566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.192425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.192458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.192486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.196301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.196332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.196360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.200223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.200442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.200459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.204433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.204469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.204481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.208410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.208446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.208458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.212370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.212405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.212417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.216225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.216283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.216295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.220102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.220307] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.220324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.224228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.224436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.224453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.228402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.228485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.228664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.232710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.232913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.233054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.237160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.237382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.237507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.241563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.241770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.241894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.245897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.246089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.246226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.250737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.250933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.547 [2024-11-19 16:16:46.251086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.547 [2024-11-19 16:16:46.256066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.547 [2024-11-19 16:16:46.256314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.256445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.261242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.261489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.261637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.266780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.266965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.267138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.272010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.272219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.272390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.276913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.276948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.276976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.281394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.281430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.281459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.285853] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.285889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.285917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.290149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.290183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.290211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.294334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.294366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.294377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.298346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.298382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.298393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.302140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.302173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.302201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.306052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.306087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.306115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.310367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.310417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.310445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:21:39.809 [2024-11-19 16:16:46.314435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.314470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.314497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.318804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.318842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.318855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.322860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.322897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.322910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.326895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.326931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.326945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.330827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.330864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.330877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.334724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.334760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.334773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.338594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.809 [2024-11-19 16:16:46.338628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.809 [2024-11-19 16:16:46.338655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.809 [2024-11-19 16:16:46.342510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.342543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.342571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.346376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.346409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.346437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.350220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.350297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.350311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.354061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.354095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.354122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.357938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.357973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.358000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.361768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.361801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.361828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.365823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.365857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.365885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.369552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.369587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.369599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.373328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.373360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.373371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.377160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.377194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.377222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.381015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.381051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.381062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.384885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.384919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.384946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.388901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.388934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.388961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.392909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.392942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 
[2024-11-19 16:16:46.392969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.396938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.396972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.396999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.400969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.401002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.401030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.404970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.405003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.405031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.408902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.408934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.408961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.412951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.412984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.413012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.416837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.416870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.416898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.420796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.420829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.420858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.424754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.424788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.424816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.428720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.428756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.428783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.432602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.432651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.432678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.436581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.436618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.436630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.440485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.440520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.444401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.810 [2024-11-19 16:16:46.444436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.810 [2024-11-19 16:16:46.444448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.810 [2024-11-19 16:16:46.448347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.448382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.448394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.452173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.452385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.452401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.456217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.456416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.456485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.460305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.460513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.460646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.464536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.464759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.464882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.468798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.468989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.469203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.473168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.473387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.473509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.477424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.477640] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.477771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.481804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.481997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.482163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.486045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.486271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.486397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.491596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.491853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.492144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.497331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.497544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.497754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.503124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.503159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.503171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.507205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.507261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.507274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.511155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.511200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.511211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.515085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.515130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.515141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:39.811 [2024-11-19 16:16:46.519478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:39.811 [2024-11-19 16:16:46.519539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.811 [2024-11-19 16:16:46.519551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.523704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.523750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.523761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.528085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.528130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.528141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.532048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.532094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.532105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.536059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.536104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.536115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.540077] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.540122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.540133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.544001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.544046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.544057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.547984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.548030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.548041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.552330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.552374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.552386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.556215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.556272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.556284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.560183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.560228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.560240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.564058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.564103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.564114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.567981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.568027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.568038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.571924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.571969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.571980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.575907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.575952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.575963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.579790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.579835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.579846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.583697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.583743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.583753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.587572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.587616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.587627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.591524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.591569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.591580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.595382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.595427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.595438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.599309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.599353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.599364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.603274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.603330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.603342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.607131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.607175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.607186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.611123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.611168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.611179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.615039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.615100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.615111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.619069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.619130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.619157] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.623049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.623109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.623120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.073 [2024-11-19 16:16:46.626978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.073 [2024-11-19 16:16:46.627040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.073 [2024-11-19 16:16:46.627051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.631030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.631078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.631104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.634928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.634960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.634970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.638838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.638869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.638881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.642604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.642634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.642645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.646326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.646352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.074 [2024-11-19 16:16:46.646363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.650056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.650102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.650114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.653997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.654043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.654054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.657879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.657925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.657935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.661764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.661809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.661821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.665717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.665761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.665772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.669621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.669666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.669677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.673421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.673466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23168 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.673477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.677299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.677344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.677354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.681169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.681214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.681225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.685048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.685093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.685104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.689072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.689117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.689128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.693081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.693126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.693136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.697055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.697100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.697111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.700949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.700994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.701004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.704933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.704978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.704990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.708824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.708869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.708880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.712773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.712817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.712828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.716779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.716824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.716836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.720765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.720809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.720820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.724763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.724807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.724819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.728789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 
[2024-11-19 16:16:46.728834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.074 [2024-11-19 16:16:46.728845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.074 [2024-11-19 16:16:46.732750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.074 [2024-11-19 16:16:46.732795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.732806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.736883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.736929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.736939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.740893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.740939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.740950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.744821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.744866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.744877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.748766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.748811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.748838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.752808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.752853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.752864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.756762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.756806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.756817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.760698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.760743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.760754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.764613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.764673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.764684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.075 7564.00 IOPS, 945.50 MiB/s [2024-11-19T16:16:46.790Z] [2024-11-19 16:16:46.770108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.770154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.770165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.773905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.773950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.773961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.777807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.777852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.777863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.075 [2024-11-19 16:16:46.782100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.075 [2024-11-19 16:16:46.782131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.075 [2024-11-19 16:16:46.782141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:21:40.336 [2024-11-19 16:16:46.786312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.786357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.786368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.790459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.790505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.790518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.794479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.794534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.798331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.798362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.798373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.802234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.802289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.802300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.806330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.806376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.806387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.810593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.810639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.810650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.814700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.814747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.814758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.818727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.818760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.818771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.822606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.822650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.822662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.826539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.826584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.826595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.830419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.830477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.834404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.834448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.834459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.838283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.838327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.336 [2024-11-19 16:16:46.838339] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.336 [2024-11-19 16:16:46.842111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.336 [2024-11-19 16:16:46.842156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.842167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.845922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.845967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.845979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.849872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.849917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.849928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.853760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.853806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.853817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.857559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.857590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.857601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.861400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.861430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.861441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.865207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.865260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.865274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.869085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.869130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.869141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.873186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.873233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.873244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.876958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.877003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.877015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.880929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.880974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.880985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.884867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.884912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.884923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.888825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.888870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.888881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.892847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.892892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:40.337 [2024-11-19 16:16:46.892903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.896940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.896986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.896997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.900969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.901015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.901027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.904962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.905008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.905019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.908979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.909025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.909036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.912966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.913011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.913022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.916839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.916885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.916895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.920731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.920777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.920788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.924867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.924914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.924925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.929146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.929193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.929204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.933386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.933432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.933444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.937760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.937789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.937800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.942135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.942183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.942195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.946592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.337 [2024-11-19 16:16:46.946662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.337 [2024-11-19 16:16:46.946697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.337 [2024-11-19 16:16:46.951139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.951185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.951196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.955567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.955627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.955639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.959779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.959826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.959837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.964019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.964065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.964076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.968445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.968492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.968504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.972683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.972729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.972741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.976806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.976852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.976864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.980871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 
[2024-11-19 16:16:46.980917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.980928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.985095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.985142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.985153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.989216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.989273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.989285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.993264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.993310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.993321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:46.997170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:46.997216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:46.997227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.001397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.001444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.001456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.005419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.005465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.005476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.009311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.009338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.009350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.013319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.013351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.013363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.017339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.017366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.017377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.021333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.021378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.021390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.025220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.025276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.025287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.029524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.029571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.029582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.033553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.033599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.033610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.037614] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.037661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.037672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.041668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.041714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.041725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.338 [2024-11-19 16:16:47.046448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.338 [2024-11-19 16:16:47.046507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.338 [2024-11-19 16:16:47.046520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.050858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.050894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.050908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.055411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.055458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.055469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.059300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.059356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.059368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.063406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.063452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:21:40.600 [2024-11-19 16:16:47.067437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.067484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.067495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.071476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.071521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.071532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.075607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.075641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.075653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.079538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.079585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.079596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.083571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.083616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.083628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.087522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.087567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.087579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.091658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.091705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.091717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.095642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.095689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.095700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.099746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.099793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.099804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.103828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.103874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.600 [2024-11-19 16:16:47.103885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.600 [2024-11-19 16:16:47.108022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.600 [2024-11-19 16:16:47.108069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.108081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.112265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.112320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.112332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.116210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.116282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.116295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.120245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.120298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.120310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.124105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.124152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.124162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.128144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.128190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.128201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.132116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.132161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.132172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.136553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.136603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.136631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.140781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.140827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.140838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.144742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.144787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.144798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.148824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.148870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.601 [2024-11-19 16:16:47.148881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.152934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.152980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.152991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.156854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.156899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.156910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.160815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.160861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.160872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.164767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.164811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.164823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.168698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.168744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.168755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.172665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.172710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.172722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.176634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.176679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.176690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.180493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.180539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.180550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.184384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.184429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.184439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.188380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.188437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.192248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.192303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.192314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.196071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.196116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.196127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.200065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.200110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.200121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.204105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.204151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.204161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.208134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.208179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.208190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.601 [2024-11-19 16:16:47.212094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.601 [2024-11-19 16:16:47.212140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.601 [2024-11-19 16:16:47.212151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.215969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.216015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.216026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.219870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.219915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.219926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.223829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.223873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.223885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.227674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.227719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.227730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.231536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 
[2024-11-19 16:16:47.231567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.231578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.235419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.235450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.235461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.239351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.239382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.239392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.243087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.243128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.243138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.247040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.247070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.247081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.251031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.251076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.251103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.254991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.255052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.255078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.258943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.258974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.258986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.262833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.262865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.262876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.266761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.266791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.266803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.270659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.270726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.270738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.274948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.275028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.275054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.279397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.279444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.279456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.283928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.283975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.284003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.288547] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.288596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.288639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.293381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.293435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.293448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.298118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.298166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.298178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.302939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.302977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.302990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.602 [2024-11-19 16:16:47.307626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.602 [2024-11-19 16:16:47.307657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.602 [2024-11-19 16:16:47.307668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.312561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.312609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.312650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.316826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.316874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.316902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:21:40.864 [2024-11-19 16:16:47.321048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.321094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.321104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.325074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.325119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.325130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.329164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.329210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.329221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.333159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.333190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.333200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.337169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.337214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.337225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.341144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.341190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.341200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.864 [2024-11-19 16:16:47.345179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.864 [2024-11-19 16:16:47.345225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.864 [2024-11-19 16:16:47.345235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.349064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.349109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.349119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.353068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.353113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.353124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.357055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.357100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.357111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.361083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.361129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.361141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.365061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.365106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.365117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.369094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.369139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.369149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.373093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.373139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.373150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.377041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.377086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.377097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.380963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.381008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.381019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.384947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.384993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.385004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.388903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.388949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.388960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.392824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.392869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.392880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.396694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.396738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.396750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.400698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.400743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:40.865 [2024-11-19 16:16:47.400754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.404615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.404661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.404672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.408468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.408514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.408525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.412523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.412569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.412580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.416290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.416335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.416346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.420143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.420188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.420198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.424099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.424144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.424155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.428062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.428106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19264 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.428117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.432044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.432090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.432101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.436010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.436056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.436066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.439967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.440012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.440023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.443919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.443964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.443975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.447896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.447940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.865 [2024-11-19 16:16:47.447951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.865 [2024-11-19 16:16:47.451915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.865 [2024-11-19 16:16:47.451960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.451971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.455868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.455914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.455925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.459728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.459772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.459783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.463577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.463618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.467365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.467395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.467406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.471193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.471238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.471249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.475124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.475169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.475180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.479098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.479143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.479154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.483093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 
00:21:40.866 [2024-11-19 16:16:47.483134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.483146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.487086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.487131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.487143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.491075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.491121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.491132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.494984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.495046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.495073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.498875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.498907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.498918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.502877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.502908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.502919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.506828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.506860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.506872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.510936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.510967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.510977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.514862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.514894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.514906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.518625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.518675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.518717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.522403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.522447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.522458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.526267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.526312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.526323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.530095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.530140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.530150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.534052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.534097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.534108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.537944] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.537989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.537999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.541712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.541757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.541768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.545603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.545633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.545643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.549423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.549454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.549465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.553223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.866 [2024-11-19 16:16:47.553277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.866 [2024-11-19 16:16:47.553288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.866 [2024-11-19 16:16:47.557087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.867 [2024-11-19 16:16:47.557133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.867 [2024-11-19 16:16:47.557144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:40.867 [2024-11-19 16:16:47.560997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.867 [2024-11-19 16:16:47.561042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.867 [2024-11-19 16:16:47.561053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:21:40.867 [2024-11-19 16:16:47.564886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.867 [2024-11-19 16:16:47.564930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.867 [2024-11-19 16:16:47.564941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:40.867 [2024-11-19 16:16:47.568850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.867 [2024-11-19 16:16:47.568896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.867 [2024-11-19 16:16:47.568907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:40.867 [2024-11-19 16:16:47.573314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:40.867 [2024-11-19 16:16:47.573386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:40.867 [2024-11-19 16:16:47.573398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.127 [2024-11-19 16:16:47.577559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.127 [2024-11-19 16:16:47.577604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.127 [2024-11-19 16:16:47.577615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.127 [2024-11-19 16:16:47.581594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.127 [2024-11-19 16:16:47.581639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.127 [2024-11-19 16:16:47.581650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.127 [2024-11-19 16:16:47.585781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.127 [2024-11-19 16:16:47.585829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.127 [2024-11-19 16:16:47.585840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.127 [2024-11-19 16:16:47.589767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.127 [2024-11-19 16:16:47.589812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.127 [2024-11-19 16:16:47.589823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.593721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.593766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.593777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.597604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.597650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.597661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.601516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.601560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.601571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.605438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.605484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.605495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.609409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.609453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.609464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.613376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.613422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.613433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.617330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.617375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.617385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.621231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.621285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.621296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.625158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.625203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.625214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.629059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.629105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.629116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.633022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.633068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.633079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.637021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.637067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.637078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.640983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.641029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.641040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.645012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.645058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.128 [2024-11-19 16:16:47.645069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.648906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.648952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.648962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.652925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.652970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.652982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.656869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.656915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.656926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.660843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.660888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.660899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.664781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.664825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.664837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.668729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.668774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.668785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.672771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.672816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14016 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.672827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.676749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.676793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.676804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.680692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.680737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.680749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.684593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.684637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.684648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.688439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.688483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.688494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.128 [2024-11-19 16:16:47.692370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.128 [2024-11-19 16:16:47.692416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.128 [2024-11-19 16:16:47.692427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.696319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.696364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.696374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.700240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.700298] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.700309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.704173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.704217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.704229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.708130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.708175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.708186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.712204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.712249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.712272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.716103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.716148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.716159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.720162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.720208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.720219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.724097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.724142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.724153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.728127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 
16:16:47.728172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.728183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.732074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.732120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.732131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.736069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.736116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.736127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.740137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.740183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.740194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.744182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.744227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.744238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.748064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.748109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.748120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.752004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.752049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.752061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.755866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.755912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.755923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.759758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.759803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.759813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.763599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.763645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.763656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:41.129 [2024-11-19 16:16:47.767393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1380650) 00:21:41.129 [2024-11-19 16:16:47.767437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.129 [2024-11-19 16:16:47.767448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:41.129 7633.50 IOPS, 954.19 MiB/s 00:21:41.129 Latency(us) 00:21:41.129 [2024-11-19T16:16:47.844Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.129 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:41.129 nvme0n1 : 2.00 7631.10 953.89 0.00 0.00 2093.68 1720.32 9413.35 00:21:41.129 [2024-11-19T16:16:47.844Z] =================================================================================================================== 00:21:41.129 [2024-11-19T16:16:47.844Z] Total : 7631.10 953.89 0.00 0.00 2093.68 1720.32 9413.35 00:21:41.129 { 00:21:41.129 "results": [ 00:21:41.129 { 00:21:41.129 "job": "nvme0n1", 00:21:41.129 "core_mask": "0x2", 00:21:41.129 "workload": "randread", 00:21:41.129 "status": "finished", 00:21:41.129 "queue_depth": 16, 00:21:41.129 "io_size": 131072, 00:21:41.129 "runtime": 2.002726, 00:21:41.129 "iops": 7631.09881231881, 00:21:41.129 "mibps": 953.8873515398512, 00:21:41.129 "io_failed": 0, 00:21:41.129 "io_timeout": 0, 00:21:41.129 "avg_latency_us": 2093.6757614223766, 00:21:41.129 "min_latency_us": 1720.32, 00:21:41.129 "max_latency_us": 9413.352727272728 00:21:41.129 } 00:21:41.129 ], 00:21:41.129 "core_count": 1 00:21:41.129 } 00:21:41.129 16:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:41.129 16:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:41.129 | .driver_specific 00:21:41.129 | .nvme_error 00:21:41.129 | .status_code 00:21:41.129 | .command_transient_transport_error' 
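The trace above (host/digest.sh get_transient_errcount) reads the injected-error tally back out of the bdevperf app over its RPC socket. A minimal sketch of that check, reusing the socket, bdev name, and jq filter shown in this run; the per-opcode nvme_error counters only appear in bdev_get_iostat output because the harness enables bdev_nvme_set_options --nvme-error-stat, as seen for the next pass further below:

    # Count completions that came back as COMMAND TRANSIENT TRANSPORT ERROR for nvme0n1,
    # querying the bdevperf app over the same RPC socket used by bperf_rpc in the trace.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The pass criterion is simply a non-zero count, e.g. the "(( 493 > 0 ))" check in the trace below.
    (( errcount > 0 )) && echo "transient transport errors observed: $errcount"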
00:21:41.129 16:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:41.129 16:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 493 > 0 )) 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95865 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95865 ']' 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95865 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95865 00:21:41.389 killing process with pid 95865 00:21:41.389 Received shutdown signal, test time was about 2.000000 seconds 00:21:41.389 00:21:41.389 Latency(us) 00:21:41.389 [2024-11-19T16:16:48.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:41.389 [2024-11-19T16:16:48.104Z] =================================================================================================================== 00:21:41.389 [2024-11-19T16:16:48.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95865' 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95865 00:21:41.389 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95865 00:21:41.648 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:21:41.648 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:41.648 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:41.648 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95912 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95912 /var/tmp/bperf.sock 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95912 ']' 00:21:41.649 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bperf.sock... 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.649 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:41.649 [2024-11-19 16:16:48.241883] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:41.649 [2024-11-19 16:16:48.241981] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95912 ] 00:21:41.907 [2024-11-19 16:16:48.388247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.907 [2024-11-19 16:16:48.407521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.907 [2024-11-19 16:16:48.436037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:41.907 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.907 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:41.907 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:41.907 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:42.166 16:16:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:42.426 nvme0n1 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
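The commands traced here set up the next error pass (randwrite, 4 KiB I/O, queue depth 128). A minimal sketch of the same sequence, reusing the paths, address, and subsystem NQN from this run; the split between the bdevperf socket (bperf_rpc) and the default-socket rpc_cmd for the accel error injection follows the trace, and which daemon the default socket reaches (presumably the nvmf target app) is an assumption, as noted in the comments:

    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"  # bperf_rpc in the trace
    RPC_CMD="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # rpc_cmd: default RPC socket (assumed to be the nvmf target app)

    # Start bdevperf idle (-z): it waits on /var/tmp/bperf.sock until perform_tests is issued.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # Collect per-opcode NVMe error statistics and retry indefinitely while errors are injected.
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c corruption disabled while connecting so the attach itself succeeds...
    $RPC_CMD accel_error_inject_error -o crc32c -t disable

    # ...then attach with data digest enabled (--ddgst); the injected CRC errors surface as
    # "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions like those below.
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Finally (next trace lines): corrupt the next 256 crc32c operations and run the workload.
    $RPC_CMD accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests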
common/autotest_common.sh@10 -- # set +x 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:42.426 16:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:42.689 Running I/O for 2 seconds... 00:21:42.689 [2024-11-19 16:16:49.215342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7100 00:21:42.689 [2024-11-19 16:16:49.216794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.216837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.229034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7970 00:21:42.689 [2024-11-19 16:16:49.230552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.230587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.242654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f81e0 00:21:42.689 [2024-11-19 16:16:49.244142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.244176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.256832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f8a50 00:21:42.689 [2024-11-19 16:16:49.258293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.258353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.270634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f92c0 00:21:42.689 [2024-11-19 16:16:49.272180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.272210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.284077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f9b30 00:21:42.689 [2024-11-19 16:16:49.285520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.285553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 
cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.297495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fa3a0 00:21:42.689 [2024-11-19 16:16:49.299034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:7091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.299084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.312704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fac10 00:21:42.689 [2024-11-19 16:16:49.314315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.314396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.329556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fb480 00:21:42.689 [2024-11-19 16:16:49.331198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.331233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.346358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fbcf0 00:21:42.689 [2024-11-19 16:16:49.347899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.347934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.361930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fc560 00:21:42.689 [2024-11-19 16:16:49.363330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.363363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.376069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fcdd0 00:21:42.689 [2024-11-19 16:16:49.377484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.689 [2024-11-19 16:16:49.377519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:42.689 [2024-11-19 16:16:49.390138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fd640 00:21:42.690 [2024-11-19 16:16:49.391538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.690 [2024-11-19 16:16:49.391576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.405317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fdeb0 00:21:42.948 [2024-11-19 16:16:49.406620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.406839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.419878] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fe720 00:21:42.948 [2024-11-19 16:16:49.421145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.421340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.434214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ff3c8 00:21:42.948 [2024-11-19 16:16:49.435532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.435567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.454344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ff3c8 00:21:42.948 [2024-11-19 16:16:49.456642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.456677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.468465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fe720 00:21:42.948 [2024-11-19 16:16:49.470861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.471060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.482887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fdeb0 00:21:42.948 [2024-11-19 16:16:49.485285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.485467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.497128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fd640 00:21:42.948 [2024-11-19 16:16:49.499413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.499445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.510578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fcdd0 00:21:42.948 [2024-11-19 16:16:49.512710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.512743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.524132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fc560 00:21:42.948 [2024-11-19 16:16:49.526355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.526388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.537525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fbcf0 00:21:42.948 [2024-11-19 16:16:49.539770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.539801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.551108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fb480 00:21:42.948 [2024-11-19 16:16:49.553274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.553302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.564568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fac10 00:21:42.948 [2024-11-19 16:16:49.566639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.566678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.577793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fa3a0 00:21:42.948 [2024-11-19 16:16:49.580032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.580059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.591369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f9b30 00:21:42.948 [2024-11-19 16:16:49.593375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.593405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.604620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f92c0 00:21:42.948 [2024-11-19 16:16:49.606655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.606721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.617898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f8a50 00:21:42.948 [2024-11-19 16:16:49.619940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.619971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.631409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f81e0 00:21:42.948 [2024-11-19 16:16:49.633514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.633694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.645222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7970 00:21:42.948 [2024-11-19 16:16:49.647349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:42.948 [2024-11-19 16:16:49.647383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:42.948 [2024-11-19 16:16:49.659198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7100 00:21:43.207 [2024-11-19 16:16:49.661513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.661544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.673389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f6890 00:21:43.207 [2024-11-19 16:16:49.675455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.675488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.686841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f6020 00:21:43.207 [2024-11-19 16:16:49.689005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:4322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.689038] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.700572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f57b0 00:21:43.207 [2024-11-19 16:16:49.702448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:14818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.702479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.713865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f4f40 00:21:43.207 [2024-11-19 16:16:49.715892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.715923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.727448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f46d0 00:21:43.207 [2024-11-19 16:16:49.729372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.729405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.740844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f3e60 00:21:43.207 [2024-11-19 16:16:49.742903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.742935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.755528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f35f0 00:21:43.207 [2024-11-19 16:16:49.757373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.207 [2024-11-19 16:16:49.757405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:43.207 [2024-11-19 16:16:49.768963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f2d80 00:21:43.207 [2024-11-19 16:16:49.771054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.771088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.782613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f2510 00:21:43.208 [2024-11-19 16:16:49.784661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 
16:16:49.784693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.796277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f1ca0 00:21:43.208 [2024-11-19 16:16:49.798058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.798089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.809620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f1430 00:21:43.208 [2024-11-19 16:16:49.811503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:18894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.811533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.823169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f0bc0 00:21:43.208 [2024-11-19 16:16:49.825071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.825098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.836665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f0350 00:21:43.208 [2024-11-19 16:16:49.838463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.838496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.849867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166efae0 00:21:43.208 [2024-11-19 16:16:49.851913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.851946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.863678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ef270 00:21:43.208 [2024-11-19 16:16:49.865388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.865421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.877155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eea00 00:21:43.208 [2024-11-19 16:16:49.879003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:43.208 [2024-11-19 16:16:49.879067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.890777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ee190 00:21:43.208 [2024-11-19 16:16:49.892490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.892523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.904116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ed920 00:21:43.208 [2024-11-19 16:16:49.905883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.208 [2024-11-19 16:16:49.905913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:43.208 [2024-11-19 16:16:49.918032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ed0b0 00:21:43.467 [2024-11-19 16:16:49.919972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.920160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:49.932696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ec840 00:21:43.467 [2024-11-19 16:16:49.934327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.934359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:49.946174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ebfd0 00:21:43.467 [2024-11-19 16:16:49.947857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.947888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:49.959719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eb760 00:21:43.467 [2024-11-19 16:16:49.961334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.961365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:49.973061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eaef0 00:21:43.467 [2024-11-19 16:16:49.974780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9575 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.974809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:49.986532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ea680 00:21:43.467 [2024-11-19 16:16:49.988388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:7104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:49.988419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.000133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e9e10 00:21:43.467 [2024-11-19 16:16:50.001834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.001864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.017165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e95a0 00:21:43.467 [2024-11-19 16:16:50.019019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.019091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.034559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e8d30 00:21:43.467 [2024-11-19 16:16:50.036332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.036372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.049422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e84c0 00:21:43.467 [2024-11-19 16:16:50.051113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.051319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.063673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e7c50 00:21:43.467 [2024-11-19 16:16:50.065181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.065214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.077385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e73e0 00:21:43.467 [2024-11-19 16:16:50.078934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:18056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.467 [2024-11-19 16:16:50.078972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:43.467 [2024-11-19 16:16:50.091134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e6b70 00:21:43.467 [2024-11-19 16:16:50.092705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.092732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.104695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e6300 00:21:43.468 [2024-11-19 16:16:50.106138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.106170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.118170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e5a90 00:21:43.468 [2024-11-19 16:16:50.119685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:7267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.119715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.131735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e5220 00:21:43.468 [2024-11-19 16:16:50.133143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.133175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.145196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e49b0 00:21:43.468 [2024-11-19 16:16:50.146624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.146654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.158657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e4140 00:21:43.468 [2024-11-19 16:16:50.160096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.160127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:43.468 [2024-11-19 16:16:50.172152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e38d0 00:21:43.468 [2024-11-19 16:16:50.173576] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:18102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.468 [2024-11-19 16:16:50.173608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.186981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e3060 00:21:43.727 [2024-11-19 16:16:50.188479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.188507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.200564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e27f0 00:21:43.727 [2024-11-19 16:16:50.201918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.201949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:43.727 18091.00 IOPS, 70.67 MiB/s [2024-11-19T16:16:50.442Z] [2024-11-19 16:16:50.215705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e1f80 00:21:43.727 [2024-11-19 16:16:50.217188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.217420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.229667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e1710 00:21:43.727 [2024-11-19 16:16:50.231196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.231409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.244294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e0ea0 00:21:43.727 [2024-11-19 16:16:50.245830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:12095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.246014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.258426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e0630 00:21:43.727 [2024-11-19 16:16:50.259947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.259983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.272327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166dfdc0 
00:21:43.727 [2024-11-19 16:16:50.273748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.273911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.286192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166df550 00:21:43.727 [2024-11-19 16:16:50.287616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:10747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.287815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.300086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166dece0 00:21:43.727 [2024-11-19 16:16:50.301509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.301695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.314212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166de470 00:21:43.727 [2024-11-19 16:16:50.315587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.315787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.333762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ddc00 00:21:43.727 [2024-11-19 16:16:50.336431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.336629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.349934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166de470 00:21:43.727 [2024-11-19 16:16:50.352714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.727 [2024-11-19 16:16:50.352889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:21:43.727 [2024-11-19 16:16:50.366848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166dece0 00:21:43.727 [2024-11-19 16:16:50.369489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.728 [2024-11-19 16:16:50.369735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:21:43.728 [2024-11-19 16:16:50.382402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with 
pdu=0x2000166df550 00:21:43.728 [2024-11-19 16:16:50.384718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:10076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.728 [2024-11-19 16:16:50.384902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:21:43.728 [2024-11-19 16:16:50.396131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166dfdc0 00:21:43.728 [2024-11-19 16:16:50.398328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.728 [2024-11-19 16:16:50.398512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:21:43.728 [2024-11-19 16:16:50.409805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e0630 00:21:43.728 [2024-11-19 16:16:50.411990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.728 [2024-11-19 16:16:50.412023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:21:43.728 [2024-11-19 16:16:50.423351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e0ea0 00:21:43.728 [2024-11-19 16:16:50.425410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.728 [2024-11-19 16:16:50.425441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:21:43.728 [2024-11-19 16:16:50.437011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e1710 00:21:43.987 [2024-11-19 16:16:50.439630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.439666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.451657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e1f80 00:21:43.987 [2024-11-19 16:16:50.453702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.453733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.465128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e27f0 00:21:43.987 [2024-11-19 16:16:50.467376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.467402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.478704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2287370) with pdu=0x2000166e3060 00:21:43.987 [2024-11-19 16:16:50.480756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.480787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.492905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e38d0 00:21:43.987 [2024-11-19 16:16:50.495321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.495369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.508663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e4140 00:21:43.987 [2024-11-19 16:16:50.511424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.511461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.524155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e49b0 00:21:43.987 [2024-11-19 16:16:50.526420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.526450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.538747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e5220 00:21:43.987 [2024-11-19 16:16:50.540869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:9825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.540904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.552888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e5a90 00:21:43.987 [2024-11-19 16:16:50.555117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.555151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.567041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e6300 00:21:43.987 [2024-11-19 16:16:50.569168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.569201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.581419] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e6b70 00:21:43.987 [2024-11-19 16:16:50.583550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.583588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.595725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e73e0 00:21:43.987 [2024-11-19 16:16:50.597702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.597736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.609748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e7c50 00:21:43.987 [2024-11-19 16:16:50.611813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.611846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.624093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e84c0 00:21:43.987 [2024-11-19 16:16:50.626083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.626118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.638155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e8d30 00:21:43.987 [2024-11-19 16:16:50.640204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:6387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.987 [2024-11-19 16:16:50.640246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:21:43.987 [2024-11-19 16:16:50.652901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e95a0 00:21:43.988 [2024-11-19 16:16:50.654918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.988 [2024-11-19 16:16:50.654968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:21:43.988 [2024-11-19 16:16:50.667261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166e9e10 00:21:43.988 [2024-11-19 16:16:50.669168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.988 [2024-11-19 16:16:50.669202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:21:43.988 [2024-11-19 16:16:50.681435] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ea680 00:21:43.988 [2024-11-19 16:16:50.683385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.988 [2024-11-19 16:16:50.683422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:21:43.988 [2024-11-19 16:16:50.696061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eaef0 00:21:43.988 [2024-11-19 16:16:50.698346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.698554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:21:44.247 [2024-11-19 16:16:50.711885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eb760 00:21:44.247 [2024-11-19 16:16:50.713737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:18771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.713771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:44.247 [2024-11-19 16:16:50.726051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ebfd0 00:21:44.247 [2024-11-19 16:16:50.728155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.728188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:44.247 [2024-11-19 16:16:50.740331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ec840 00:21:44.247 [2024-11-19 16:16:50.742080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.742111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:21:44.247 [2024-11-19 16:16:50.753717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ed0b0 00:21:44.247 [2024-11-19 16:16:50.755499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.755530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:44.247 [2024-11-19 16:16:50.767125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ed920 00:21:44.247 [2024-11-19 16:16:50.768929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.247 [2024-11-19 16:16:50.768956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:44.247 
[2024-11-19 16:16:50.780655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ee190 00:21:44.248 [2024-11-19 16:16:50.782377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:10900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.782562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.794103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166eea00 00:21:44.248 [2024-11-19 16:16:50.795967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.795995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.807691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166ef270 00:21:44.248 [2024-11-19 16:16:50.809354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.809386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.821093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166efae0 00:21:44.248 [2024-11-19 16:16:50.822855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.822889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.834553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f0350 00:21:44.248 [2024-11-19 16:16:50.836487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:7670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.836519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.848264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f0bc0 00:21:44.248 [2024-11-19 16:16:50.849887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.849918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.861615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f1430 00:21:44.248 [2024-11-19 16:16:50.863336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.863366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 
00:21:44.248 [2024-11-19 16:16:50.875343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f1ca0 00:21:44.248 [2024-11-19 16:16:50.876927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.876957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.888706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f2510 00:21:44.248 [2024-11-19 16:16:50.890294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.890351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.901989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f2d80 00:21:44.248 [2024-11-19 16:16:50.903687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.903716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.915541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f35f0 00:21:44.248 [2024-11-19 16:16:50.917084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.917115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.929030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f3e60 00:21:44.248 [2024-11-19 16:16:50.930620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.930651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.942494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f46d0 00:21:44.248 [2024-11-19 16:16:50.944336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.944373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:21:44.248 [2024-11-19 16:16:50.956409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f4f40 00:21:44.248 [2024-11-19 16:16:50.958033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.248 [2024-11-19 16:16:50.958065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 
sqhd:000a p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:50.970736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f57b0 00:21:44.508 [2024-11-19 16:16:50.972316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:50.972348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:50.984343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f6020 00:21:44.508 [2024-11-19 16:16:50.985803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:50.985835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:50.997827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f6890 00:21:44.508 [2024-11-19 16:16:50.999383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:50.999415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.011274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7100 00:21:44.508 [2024-11-19 16:16:51.012842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.012869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.024867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f7970 00:21:44.508 [2024-11-19 16:16:51.026310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.026505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.038372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f81e0 00:21:44.508 [2024-11-19 16:16:51.039832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.052277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f8a50 00:21:44.508 [2024-11-19 16:16:51.053709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.053741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.065754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f92c0 00:21:44.508 [2024-11-19 16:16:51.067217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.067425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.079714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166f9b30 00:21:44.508 [2024-11-19 16:16:51.081081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.081114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.093229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fa3a0 00:21:44.508 [2024-11-19 16:16:51.094622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.094653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.106810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fac10 00:21:44.508 [2024-11-19 16:16:51.108191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.108222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.120332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fb480 00:21:44.508 [2024-11-19 16:16:51.121750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.121777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.134026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fbcf0 00:21:44.508 [2024-11-19 16:16:51.135419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.135452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.147704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fc560 00:21:44.508 [2024-11-19 16:16:51.148998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.149030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.161185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fcdd0 00:21:44.508 [2024-11-19 16:16:51.162495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.162526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.174491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fd640 00:21:44.508 [2024-11-19 16:16:51.175764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.175796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.187929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fdeb0 00:21:44.508 [2024-11-19 16:16:51.189176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.189384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:21:44.508 [2024-11-19 16:16:51.201493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2287370) with pdu=0x2000166fe720 00:21:44.508 [2024-11-19 16:16:51.202787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.508 [2024-11-19 16:16:51.202823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:21:44.508 18090.50 IOPS, 70.67 MiB/s 00:21:44.508 Latency(us) 00:21:44.508 [2024-11-19T16:16:51.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.508 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:44.508 nvme0n1 : 2.01 18074.21 70.60 0.00 0.00 7076.34 2487.39 29312.47 00:21:44.508 [2024-11-19T16:16:51.223Z] =================================================================================================================== 00:21:44.508 [2024-11-19T16:16:51.223Z] Total : 18074.21 70.60 0.00 0.00 7076.34 2487.39 29312.47 00:21:44.508 { 00:21:44.508 "results": [ 00:21:44.508 { 00:21:44.508 "job": "nvme0n1", 00:21:44.508 "core_mask": "0x2", 00:21:44.508 "workload": "randwrite", 00:21:44.508 "status": "finished", 00:21:44.508 "queue_depth": 128, 00:21:44.508 "io_size": 4096, 00:21:44.508 "runtime": 2.008884, 00:21:44.508 "iops": 18074.214339902155, 00:21:44.508 "mibps": 70.6023997652428, 00:21:44.508 "io_failed": 0, 00:21:44.508 "io_timeout": 0, 00:21:44.508 "avg_latency_us": 7076.337259131845, 00:21:44.508 "min_latency_us": 2487.389090909091, 00:21:44.508 "max_latency_us": 29312.465454545454 00:21:44.508 } 00:21:44.508 ], 00:21:44.508 "core_count": 1 00:21:44.508 } 00:21:44.767 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:44.767 16:16:51 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:44.767 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:44.767 | .driver_specific 00:21:44.767 | .nvme_error 00:21:44.767 | .status_code 00:21:44.767 | .command_transient_transport_error' 00:21:44.767 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 142 > 0 )) 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95912 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95912 ']' 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95912 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95912 00:21:45.027 killing process with pid 95912 00:21:45.027 Received shutdown signal, test time was about 2.000000 seconds 00:21:45.027 00:21:45.027 Latency(us) 00:21:45.027 [2024-11-19T16:16:51.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.027 [2024-11-19T16:16:51.742Z] =================================================================================================================== 00:21:45.027 [2024-11-19T16:16:51.742Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95912' 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95912 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95912 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95961 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 95961 /var/tmp/bperf.sock 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 95961 ']' 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:45.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.027 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:45.027 Zero copy mechanism will not be used. 00:21:45.027 [2024-11-19 16:16:51.719259] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:21:45.027 [2024-11-19 16:16:51.719394] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95961 ] 00:21:45.287 [2024-11-19 16:16:51.862485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.287 [2024-11-19 16:16:51.882922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.287 [2024-11-19 16:16:51.911869] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:45.287 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.287 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:21:45.287 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:45.287 16:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:45.546 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
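The xtrace in this part of the run shows how the digest-error pass is wired up: the bdevperf initiator is driven over /var/tmp/bperf.sock, the controller is attached with data digest enabled (--ddgst), crc32c corruption is injected through the accel error-injection RPC, and the transient-error count is later read back via bdev_get_iostat. The following is a minimal hand-run sketch of that same sequence; the repo-relative script paths and the use of the application's default RPC socket for the injection call are assumptions, while the commands and arguments themselves are taken from the trace around this point.

  # Initiator side: keep per-status-code NVMe error counters and retry forever,
  # then attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Corrupt every 32nd crc32c operation via the accel error-injection RPC
  # (sent to the application's default RPC socket in this sketch).
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32

  # Run the timed workload, then read back the transient transport error counter
  # that host/digest.sh checks for a non-zero value.
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'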
00:21:46.115 nvme0n1 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:46.115 16:16:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:46.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:46.115 Zero copy mechanism will not be used. 00:21:46.115 Running I/O for 2 seconds... 00:21:46.115 [2024-11-19 16:16:52.692855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.692949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.115 [2024-11-19 16:16:52.692977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.115 [2024-11-19 16:16:52.698346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.698465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.115 [2024-11-19 16:16:52.698488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.115 [2024-11-19 16:16:52.703093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.703225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.115 [2024-11-19 16:16:52.703247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.115 [2024-11-19 16:16:52.707787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.707919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.115 [2024-11-19 16:16:52.707941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.115 [2024-11-19 16:16:52.712343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.712473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.115 [2024-11-19 16:16:52.712495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 
cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.115 [2024-11-19 16:16:52.716882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.115 [2024-11-19 16:16:52.717016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.717053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.721566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.721664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.721685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.726225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.726537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.726859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.731246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.731390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.731412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.735765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.735888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.735910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.740385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.740489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.740511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.745010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.745145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.745166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.749671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.749772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.749794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.754285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.754419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.754440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.758758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.758878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.758899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.763438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.763531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.763551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.767939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.768074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.768096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.772592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.772721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.772742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.777166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.777313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.777334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.781667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.781928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.781949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.786593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.786735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.786757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.791229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.791348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.791370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.795910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.796012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.796033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.800500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.800631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.800652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.805010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.805244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.805267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.809794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.809925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.809946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.814416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.814516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.814536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.819104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.819206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.819227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.116 [2024-11-19 16:16:52.824022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.116 [2024-11-19 16:16:52.824156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.116 [2024-11-19 16:16:52.824177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.829227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.829480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.829502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.834502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.834601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.834622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.839192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.839321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.839343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.843914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.844057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 
16:16:52.844077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.848487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.848615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.848636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.853027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.853278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.853301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.857916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.858016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.858037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.377 [2024-11-19 16:16:52.862371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.377 [2024-11-19 16:16:52.862502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.377 [2024-11-19 16:16:52.862523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.866855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.866945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.866966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.871598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.871734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.871756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.876231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.876384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:46.378 [2024-11-19 16:16:52.876405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.880826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.880928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.880948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.885859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.886000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.886020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.891006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.891360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.891611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.896225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.896525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.896549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.901867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.901986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.902009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.907198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.907332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.907370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.912382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.912521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.912544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.917460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.917563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.917584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.922346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.922484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.922506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.927288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.927495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.927519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.932206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.932506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.932530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.937359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.937644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.937816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.942360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.942621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.942870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.947543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.947811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.947972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.952492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.952807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.953028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.957376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.957645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.957795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.962270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.962558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.962743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.967356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.967631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.967800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.972316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.972586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.972795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.977618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.977720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.977743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.982355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.982453] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.982475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.986981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.987147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.987169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.991838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.378 [2024-11-19 16:16:52.992099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.378 [2024-11-19 16:16:52.992123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.378 [2024-11-19 16:16:52.996855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:52.996992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:52.997013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.001589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.001724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.001745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.006164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.006308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.006329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.010965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.011144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.011165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.015848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.016114] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.016137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.020938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.021069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.021090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.025850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.025978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.026000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.030482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.030620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.030641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.035219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.035375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.035413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.040121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.040392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.040416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.045007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.045151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.045171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.049778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 
16:16:53.049916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.049937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.054532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.054624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.054645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.059415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.059504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.059526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.064003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.064141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.064161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.068736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.068835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.068856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.073663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.073796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.073817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.078581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.078758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.078782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.379 [2024-11-19 16:16:53.083530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with 
pdu=0x2000166ff3c8 00:21:46.379 [2024-11-19 16:16:53.083646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.379 [2024-11-19 16:16:53.083667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.088717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.088824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.088847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.093508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.093657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.093678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.098428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.098529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.098550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.103041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.103305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.103328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.107867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.107999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.108020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.112525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.112650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.112671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.117134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.117265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.117315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.640 [2024-11-19 16:16:53.121830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.640 [2024-11-19 16:16:53.121955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.640 [2024-11-19 16:16:53.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.126395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.126494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.126516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.130963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.131245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.131267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.135881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.136011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.136031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.140430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.140565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.140586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.144959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.145090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.145111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.149759] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.149858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.149878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.154552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.154653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.154684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.159194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.159442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.159465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.163920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.164054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.164075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.168525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.168625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.168646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.173015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.173146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.173167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.177852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.178003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.178024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 
[2024-11-19 16:16:53.182542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.182668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.182731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.187244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.187491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.187513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.192009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.192140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.192160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.196830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.197090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.197112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.201723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.201857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.201879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.206231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.206396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.206417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.210845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.210967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.210988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.215604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.215688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.215708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.220221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.220307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.220327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.224741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.224864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.224884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.229279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.229408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.229428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.233792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.233925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.233945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.238354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.238454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.238474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.243006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.641 [2024-11-19 16:16:53.243138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.641 [2024-11-19 16:16:53.243159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.641 [2024-11-19 16:16:53.247713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.247847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.247867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.252316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.252449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.252469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.256828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.256972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.256992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.261500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.261575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.261596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.266003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.266077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.266097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.270492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.270625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.270646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.274985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.275112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.275132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.279698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.279802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.279822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.284256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.284404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.284426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.288849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.288983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.289004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.293515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.293618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.293638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.298004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.298133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.298153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.302609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.302724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.302761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.307238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.307400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.307421] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.311836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.311984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.312004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.316505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.316579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.316599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.321114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.321249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.321282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.325761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.325835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.325855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.330317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.330440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.330460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.334924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.335065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.335085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.339534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.339683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 
16:16:53.339703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.344177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.344323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.344344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.642 [2024-11-19 16:16:53.349019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.642 [2024-11-19 16:16:53.349134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.642 [2024-11-19 16:16:53.349155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.354192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.354307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.354328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.359223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.359367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.359388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.363767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.363915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.363935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.368390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.368526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.368547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.372892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.373026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:46.903 [2024-11-19 16:16:53.373046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.377543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.377643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.377663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.382013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.382147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.382167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.386460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.386531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.386552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.391091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.903 [2024-11-19 16:16:53.391189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.903 [2024-11-19 16:16:53.391210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.903 [2024-11-19 16:16:53.395756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.395889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.395909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.400284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.400435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.400455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.404760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.404893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.404913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.409740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.409845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.409866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.414804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.414875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.414898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.420404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.420615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.420654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.425932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.426034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.426055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.431203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.431379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.431403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.436404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.436551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.436575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.441515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.441683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.441703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.446587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.446764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.446787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.451730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.451821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.451841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.456515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.456650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.456686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.461019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.461168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.461188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.465754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.465898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.465919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.470281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.470415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.470436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.474840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.474961] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.474982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.479292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.479421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.479441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.483743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.483878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.483899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.488273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.488404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.488424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.492739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.492871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.492892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.497437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.497566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.497586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.501890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.502024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.502044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.506406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.506540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.506560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.510947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.511045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.511081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.515577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.515651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.904 [2024-11-19 16:16:53.515671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.904 [2024-11-19 16:16:53.520146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.904 [2024-11-19 16:16:53.520309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.520331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.524720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.524850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.524870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.529407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.529542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.529563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.533831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.533977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.533996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.538321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 
16:16:53.538457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.538478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.542876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.543030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.543065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.547536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.547610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.547630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.552009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.552143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.552163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.556678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.556810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.556831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.561241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.561376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.561396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.565767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.565902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.565923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.570467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with 
pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.570541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.570562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.574911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.575051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.579515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.579606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.579628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.583939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.584072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.584093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.588530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.588627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.588647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.593015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.593088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.593109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.597691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.597789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.597809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.602199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.602314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.602336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.606662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.606801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.606822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:46.905 [2024-11-19 16:16:53.611575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:46.905 [2024-11-19 16:16:53.611675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.905 [2024-11-19 16:16:53.611696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.616726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.616829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.616849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.621756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.621891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.621912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.626232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.626424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.626444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.630914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.631038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.631074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.635564] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.635700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.635720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.640150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.640246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.640279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.644808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.644931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.644950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.649432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.649570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.649591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.653957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.654057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.654076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.658727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.658868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.658889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.663382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.663504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.166 [2024-11-19 16:16:53.663525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.166 [2024-11-19 16:16:53.668015] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.166 [2024-11-19 16:16:53.668153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.668174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.672951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.673052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.673073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.677608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.677728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.677749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.682021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.682163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.682182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 6489.00 IOPS, 811.12 MiB/s [2024-11-19T16:16:53.882Z] [2024-11-19 16:16:53.687643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.687746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.687768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.692222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.692368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.692388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.696840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.696975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.696995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.701487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.701575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.701596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.706081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.706223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.706243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.710600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.710759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.710780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.715219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.715383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.715403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.719861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.719994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.720013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.724460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.724533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.724554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.729186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.729317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.729338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.733861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.733993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.734013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.738317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.738391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.738412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.742849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.742935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.742956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.747480] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.747614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.747634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.752093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.752238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.752288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.756732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.756874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.756894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.761409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.761506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.761526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.765916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.766013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.766033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.770479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.770581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.770602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.774965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.775071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.775092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.779588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.779726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.779746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.784112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.784245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.784277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.788705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.788841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.167 [2024-11-19 16:16:53.788862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.167 [2024-11-19 16:16:53.793216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.167 [2024-11-19 16:16:53.793382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 
16:16:53.793402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.797778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.797907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.797928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.802277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.802413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.802433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.806778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.806946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.806968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.811364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.811501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.811521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.815868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.816001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.816021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.820538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.820674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.820695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.825107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.825241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:47.168 [2024-11-19 16:16:53.825275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.829703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.829835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.829855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.834322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.834402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.834423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.838734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.838859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.838879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.843335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.843471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.843491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.847891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.848013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.848033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.852517] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.852655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.852676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.857024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.857158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.857177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.861661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.861735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.866182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.866330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.866350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.870796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.870920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.870940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.168 [2024-11-19 16:16:53.875801] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.168 [2024-11-19 16:16:53.875969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.168 [2024-11-19 16:16:53.875991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.428 [2024-11-19 16:16:53.880831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.880967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.880987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.885722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.885856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.885876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.890265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.890401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.890421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.894775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.894890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.894911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.899488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.899585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.899606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.903981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.904054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.904074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.908581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.908714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.908734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.913225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.913341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.913361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.917834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.917967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.917987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.922464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.922562] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.922581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.927021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.927174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.927195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.931675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.931772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.931793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.936243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.936392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.936413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.940690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.940822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.940842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.945369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.945443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.945463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.949943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.950084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.950103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.954633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.954768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.954788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.959094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.959191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.959211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.963780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.963914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.963935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.968370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.968470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.968489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.972909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.973040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.973061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.977485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.977577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.977599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.982009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.982142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.982162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.986524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 
16:16:53.986658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.986702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.991209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.991347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.991368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:53.995712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:53.995847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.429 [2024-11-19 16:16:53.995867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.429 [2024-11-19 16:16:54.000262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.429 [2024-11-19 16:16:54.000408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.000429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.004718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.004850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.004870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.009201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.009331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.009352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.013675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.013805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.013825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.018327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with 
pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.018475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.018495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.022834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.022960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.022980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.027345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.027490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.027511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.031786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.031920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.031939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.036265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.036398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.036418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.040764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.040896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.040916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.045468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.045542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.045562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.050075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.050181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.050202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.054867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.054961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.054983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.059519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.059600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.059621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.064154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.064305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.064326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.068665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.068799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.068818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.073382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.073510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.073532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.078315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.078453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.078474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.083351] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.083477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.083498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.088116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.088237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.088258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.093265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.093432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.093455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.098588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.098800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.098842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.103908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.104044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.104065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.109058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.109280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.109303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.115107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.115242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.115276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.121552] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.121681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.121702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.128681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.128798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.430 [2024-11-19 16:16:54.128820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.430 [2024-11-19 16:16:54.133355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.430 [2024-11-19 16:16:54.133492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.431 [2024-11-19 16:16:54.133513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.431 [2024-11-19 16:16:54.138606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.431 [2024-11-19 16:16:54.138716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.431 [2024-11-19 16:16:54.138739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.144035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.144166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.144188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.149067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.149199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.149220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.154033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.154173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.154194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.691 
[2024-11-19 16:16:54.158892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.158987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.159041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.163979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.164112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.164148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.168898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.169038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.169059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.173696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.173838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.173858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.178463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.178586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.183833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.183971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.183992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:21:47.691 [2024-11-19 16:16:54.189793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:47.691 [2024-11-19 16:16:54.189914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:47.691 [2024-11-19 16:16:54.189937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:21:47.691
[... repeated log entries condensed: the same tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8, followed by the matching nvme_qpair.c WRITE print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1, recurs for every injected WRITE (len:32, varying lba/cid/sqhd) from 16:16:54.194 through 16:16:54.680 ...]
00:21:48.215 6519.50 IOPS, 814.94 MiB/s
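Note on the check that the xtrace below performs: host/digest.sh queries the bperf instance over its RPC socket (/var/tmp/bperf.sock) for per-bdev iostat, extracts the command_transient_transport_error counter with jq, and requires it to be non-zero. The following is a minimal sketch reassembled from that trace; the helper name get_transient_errcount, the rpc.py invocation, and the jq filter are taken verbatim from the trace, while the exact function body and the errcount variable are assumptions for illustration, not the verbatim source of host/digest.sh.

    # Sketch (reconstructed): count transient transport errors seen by the bperf bdev
    get_transient_errcount() {
        local bdev=$1
        # Ask the bperf app for iostat on the given bdev, then pull the NVMe
        # transient-transport-error counter out of the driver-specific stats.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)
    (( errcount > 0 ))   # digest-error test passes only if transient transport errors were recorded

In this run the counter comes back as 422, consistent with the stream of COMMAND TRANSIENT TRANSPORT ERROR completions logged above.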
[2024-11-19T16:16:54.930Z] [2024-11-19 16:16:54.685925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22876b0) with pdu=0x2000166ff3c8 00:21:48.215 [2024-11-19 16:16:54.686050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.215 [2024-11-19 16:16:54.686071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:48.215 00:21:48.215 Latency(us) 00:21:48.215 [2024-11-19T16:16:54.930Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.215 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:48.215 nvme0n1 : 2.00 6517.63 814.70 0.00 0.00 2449.21 1660.74 12809.31 00:21:48.215 [2024-11-19T16:16:54.930Z] =================================================================================================================== 00:21:48.215 [2024-11-19T16:16:54.930Z] Total : 6517.63 814.70 0.00 0.00 2449.21 1660.74 12809.31 00:21:48.215 { 00:21:48.215 "results": [ 00:21:48.215 { 00:21:48.215 "job": "nvme0n1", 00:21:48.215 "core_mask": "0x2", 00:21:48.215 "workload": "randwrite", 00:21:48.215 "status": "finished", 00:21:48.215 "queue_depth": 16, 00:21:48.215 "io_size": 131072, 00:21:48.215 "runtime": 2.004256, 00:21:48.215 "iops": 6517.630482333594, 00:21:48.215 "mibps": 814.7038102916993, 00:21:48.215 "io_failed": 0, 00:21:48.215 "io_timeout": 0, 00:21:48.215 "avg_latency_us": 2449.208634519427, 00:21:48.215 "min_latency_us": 1660.7418181818182, 00:21:48.215 "max_latency_us": 12809.309090909092 00:21:48.215 } 00:21:48.215 ], 00:21:48.215 "core_count": 1 00:21:48.215 } 00:21:48.215 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:48.215 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:48.215 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:48.215 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:48.215 | .driver_specific 00:21:48.215 | .nvme_error 00:21:48.215 | .status_code 00:21:48.215 | .command_transient_transport_error' 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 422 > 0 )) 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95961 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95961 ']' 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95961 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95961 00:21:48.475 killing process with pid 95961 00:21:48.475 Received shutdown signal, test time was about 2.000000 seconds 00:21:48.475 00:21:48.475 Latency(us) 00:21:48.475 [2024-11-19T16:16:55.190Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:21:48.475 [2024-11-19T16:16:55.190Z] =================================================================================================================== 00:21:48.475 [2024-11-19T16:16:55.190Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95961' 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95961 00:21:48.475 16:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95961 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95791 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 95791 ']' 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 95791 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95791 00:21:48.475 killing process with pid 95791 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95791' 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 95791 00:21:48.475 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 95791 00:21:48.734 ************************************ 00:21:48.734 END TEST nvmf_digest_error 00:21:48.734 ************************************ 00:21:48.734 00:21:48.734 real 0m14.231s 00:21:48.734 user 0m27.456s 00:21:48.734 sys 0m4.296s 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:48.734 16:16:55 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:48.734 rmmod nvme_tcp 00:21:48.734 rmmod nvme_fabrics 00:21:48.734 rmmod nvme_keyring 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 95791 ']' 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 95791 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 95791 ']' 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 95791 00:21:48.734 Process with pid 95791 is not found 00:21:48.734 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (95791) - No such process 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 95791 is not found' 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:48.734 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if2 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:48.993 00:21:48.993 real 0m30.488s 00:21:48.993 user 0m57.285s 00:21:48.993 sys 0m9.227s 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 ************************************ 00:21:48.993 END TEST nvmf_digest 00:21:48.993 ************************************ 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.993 ************************************ 00:21:48.993 START TEST nvmf_host_multipath 00:21:48.993 ************************************ 00:21:48.993 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:49.253 * Looking for test storage... 
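The nvmf_digest_error test that ends above passes only when the target's iostat reports a non-zero count of transient transport errors caused by the injected header/data digest corruption. A minimal standalone sketch of that check, reusing the RPC socket, bdev name, and jq filter visible in the trace (illustrative only, not part of the original digest.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # RPC client used throughout the run
    sock=/var/tmp/bperf.sock                          # bdevperf RPC socket seen in the trace
    errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # host/digest.sh@71 asserts the count is non-zero; in this run it was 422
    (( errcount > 0 )) && echo "digest corruption surfaced as transient transport errors"

In this run the extracted count was 422, so the (( 422 > 0 )) assertion seen in the trace succeeded.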
00:21:49.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:49.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.253 --rc genhtml_branch_coverage=1 00:21:49.253 --rc genhtml_function_coverage=1 00:21:49.253 --rc genhtml_legend=1 00:21:49.253 --rc geninfo_all_blocks=1 00:21:49.253 --rc geninfo_unexecuted_blocks=1 00:21:49.253 00:21:49.253 ' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:49.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.253 --rc genhtml_branch_coverage=1 00:21:49.253 --rc genhtml_function_coverage=1 00:21:49.253 --rc genhtml_legend=1 00:21:49.253 --rc geninfo_all_blocks=1 00:21:49.253 --rc geninfo_unexecuted_blocks=1 00:21:49.253 00:21:49.253 ' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:49.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.253 --rc genhtml_branch_coverage=1 00:21:49.253 --rc genhtml_function_coverage=1 00:21:49.253 --rc genhtml_legend=1 00:21:49.253 --rc geninfo_all_blocks=1 00:21:49.253 --rc geninfo_unexecuted_blocks=1 00:21:49.253 00:21:49.253 ' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:49.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.253 --rc genhtml_branch_coverage=1 00:21:49.253 --rc genhtml_function_coverage=1 00:21:49.253 --rc genhtml_legend=1 00:21:49.253 --rc geninfo_all_blocks=1 00:21:49.253 --rc geninfo_unexecuted_blocks=1 00:21:49.253 00:21:49.253 ' 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.253 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.254 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:49.254 Cannot find device "nvmf_init_br" 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:49.254 Cannot find device "nvmf_init_br2" 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:49.254 Cannot find device "nvmf_tgt_br" 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.254 Cannot find device "nvmf_tgt_br2" 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:49.254 Cannot find device "nvmf_init_br" 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:49.254 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:49.254 Cannot find device "nvmf_init_br2" 00:21:49.255 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:49.255 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:49.513 Cannot find device "nvmf_tgt_br" 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:49.513 Cannot find device "nvmf_tgt_br2" 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:49.513 Cannot find device "nvmf_br" 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:49.513 16:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:49.513 Cannot find device "nvmf_init_if" 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:49.513 Cannot find device "nvmf_init_if2" 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:49.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.513 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.513 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
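The nvmf_veth_init sequence traced around this point builds the test topology: the target runs inside the nvmf_tgt_ns_spdk namespace with two veth legs (10.0.0.3 and 10.0.0.4), the initiator keeps two legs in the root namespace (10.0.0.1 and 10.0.0.2), and all four peer ends are enslaved to the nvmf_br bridge. A condensed sketch of the same steps, assuming the interface names and addresses used in this run:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator leg 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator leg 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target leg 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target leg 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" master nvmf_br                            # enslave all peer ends to the bridge
    done
    ping -c 1 10.0.0.3                                             # initiator -> target reachability check

The iptables ACCEPT rules and the four ping checks that follow in the trace verify reachability across the bridge before the target is started in the namespace.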
00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:49.514 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:49.773 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:49.773 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:21:49.773 00:21:49.773 --- 10.0.0.3 ping statistics --- 00:21:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.773 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:49.773 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:49.773 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:21:49.773 00:21:49.773 --- 10.0.0.4 ping statistics --- 00:21:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.773 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.773 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.773 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:21:49.773 00:21:49.773 --- 10.0.0.1 ping statistics --- 00:21:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.773 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:49.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:49.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:21:49.773 00:21:49.773 --- 10.0.0.2 ping statistics --- 00:21:49.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.773 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=96264 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 96264 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96264 ']' 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.773 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:49.773 [2024-11-19 16:16:56.372401] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:21:49.773 [2024-11-19 16:16:56.372496] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.032 [2024-11-19 16:16:56.523531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:50.032 [2024-11-19 16:16:56.547410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.032 [2024-11-19 16:16:56.547469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.032 [2024-11-19 16:16:56.547483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.032 [2024-11-19 16:16:56.547493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.032 [2024-11-19 16:16:56.547502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:50.032 [2024-11-19 16:16:56.548389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.032 [2024-11-19 16:16:56.548395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.032 [2024-11-19 16:16:56.583838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96264 00:21:50.032 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:50.293 [2024-11-19 16:16:56.951462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.293 16:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:50.859 Malloc0 00:21:50.859 16:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:51.118 16:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:51.118 16:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:51.376 [2024-11-19 16:16:58.016015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:51.376 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:51.634 [2024-11-19 16:16:58.244101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96312 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96312 /var/tmp/bdevperf.sock 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 96312 ']' 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.634 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:51.893 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.893 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:21:51.893 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:52.151 16:16:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.409 Nvme0n1 00:21:52.409 16:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:52.975 Nvme0n1 00:21:52.975 16:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:52.975 16:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:53.983 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:53.983 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:53.983 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:54.549 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:54.549 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96350 00:21:54.549 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:54.549 16:17:00 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:01.113 16:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:01.114 16:17:06 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.114 Attaching 4 probes... 00:22:01.114 @path[10.0.0.3, 4421]: 20196 00:22:01.114 @path[10.0.0.3, 4421]: 20622 00:22:01.114 @path[10.0.0.3, 4421]: 20619 00:22:01.114 @path[10.0.0.3, 4421]: 20660 00:22:01.114 @path[10.0.0.3, 4421]: 20466 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96350 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96469 00:22:01.114 16:17:07 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:01.114 16:17:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:07.684 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:07.684 16:17:13 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.685 Attaching 4 probes... 00:22:07.685 @path[10.0.0.3, 4420]: 20371 00:22:07.685 @path[10.0.0.3, 4420]: 20657 00:22:07.685 @path[10.0.0.3, 4420]: 20812 00:22:07.685 @path[10.0.0.3, 4420]: 20979 00:22:07.685 @path[10.0.0.3, 4420]: 20994 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96469 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:07.685 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:07.944 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:07.944 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96576 00:22:07.944 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:07.944 16:17:14 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.510 Attaching 4 probes... 00:22:14.510 @path[10.0.0.3, 4421]: 13952 00:22:14.510 @path[10.0.0.3, 4421]: 20334 00:22:14.510 @path[10.0.0.3, 4421]: 20484 00:22:14.510 @path[10.0.0.3, 4421]: 20350 00:22:14.510 @path[10.0.0.3, 4421]: 20280 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96576 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:14.510 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:14.511 16:17:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:14.511 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:14.769 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:14.769 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96696 00:22:14.769 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:14.769 16:17:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.332 Attaching 4 probes... 
00:22:21.332 00:22:21.332 00:22:21.332 00:22:21.332 00:22:21.332 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96696 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:21.332 16:17:27 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:21.590 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:21.590 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:21.590 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96815 00:22:21.590 16:17:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:28.154 Attaching 4 probes... 
00:22:28.154 @path[10.0.0.3, 4421]: 19615 00:22:28.154 @path[10.0.0.3, 4421]: 19766 00:22:28.154 @path[10.0.0.3, 4421]: 19938 00:22:28.154 @path[10.0.0.3, 4421]: 20059 00:22:28.154 @path[10.0.0.3, 4421]: 20066 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96815 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:28.154 16:17:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:29.089 16:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:29.089 16:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=96934 00:22:29.089 16:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:29.089 16:17:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:35.654 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:35.654 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:35.655 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:35.655 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:35.655 Attaching 4 probes... 
00:22:35.655 @path[10.0.0.3, 4420]: 19599 00:22:35.655 @path[10.0.0.3, 4420]: 20060 00:22:35.655 @path[10.0.0.3, 4420]: 20029 00:22:35.655 @path[10.0.0.3, 4420]: 19976 00:22:35.655 @path[10.0.0.3, 4420]: 20032 00:22:35.655 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:35.655 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:35.655 16:17:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 96934 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:35.655 [2024-11-19 16:17:42.269579] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:35.655 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:35.915 16:17:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:42.482 16:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:42.482 16:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97108 00:22:42.482 16:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96264 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:42.483 16:17:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:49.064 Attaching 4 probes... 
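Note: the @67 step in each pass keys off nvmf_subsystem_get_listeners: every listener entry carries its address (traddr/trsvcid) plus per-ANA-group states, and the jq filter picks the trsvcid of whichever listener reports the wanted state. Roughly, against output shaped like the following (field layout inferred from the jq expression used above; the concrete values are illustrative, not copied from this run):

cat <<'EOF' > /tmp/listeners.json
[
  {"address": {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.3", "trsvcid": "4420"},
   "ana_states": [{"ana_group": 1, "ana_state": "non_optimized"}]},
  {"address": {"trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.3", "trsvcid": "4421"},
   "ana_states": [{"ana_group": 1, "ana_state": "optimized"}]}
]
EOF
jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid' /tmp/listeners.json
# prints: 4421

With the 4421 listener re-added and set to optimized at @107/@108 above, the confirm pass whose trace output follows is expected to resolve to port 4421 again.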
00:22:49.064 @path[10.0.0.3, 4421]: 19591 00:22:49.064 @path[10.0.0.3, 4421]: 19888 00:22:49.064 @path[10.0.0.3, 4421]: 19896 00:22:49.064 @path[10.0.0.3, 4421]: 19980 00:22:49.064 @path[10.0.0.3, 4421]: 20040 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97108 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96312 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96312 ']' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96312 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96312 00:22:49.064 killing process with pid 96312 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96312' 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96312 00:22:49.064 16:17:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96312 00:22:49.064 { 00:22:49.064 "results": [ 00:22:49.064 { 00:22:49.064 "job": "Nvme0n1", 00:22:49.064 "core_mask": "0x4", 00:22:49.064 "workload": "verify", 00:22:49.064 "status": "terminated", 00:22:49.064 "verify_range": { 00:22:49.064 "start": 0, 00:22:49.064 "length": 16384 00:22:49.064 }, 00:22:49.064 "queue_depth": 128, 00:22:49.064 "io_size": 4096, 00:22:49.064 "runtime": 55.388539, 00:22:49.064 "iops": 8552.50939187979, 00:22:49.064 "mibps": 33.40823981203043, 00:22:49.064 "io_failed": 0, 00:22:49.064 "io_timeout": 0, 00:22:49.064 "avg_latency_us": 14937.154066677784, 00:22:49.064 "min_latency_us": 1295.8254545454545, 00:22:49.064 "max_latency_us": 7015926.69090909 00:22:49.064 } 00:22:49.064 ], 00:22:49.064 "core_count": 1 00:22:49.064 } 00:22:49.064 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96312 00:22:49.064 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:49.064 [2024-11-19 16:16:58.309251] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / 
DPDK 23.11.0 initialization... 00:22:49.064 [2024-11-19 16:16:58.309346] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96312 ] 00:22:49.064 [2024-11-19 16:16:58.457440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.064 [2024-11-19 16:16:58.480874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.064 [2024-11-19 16:16:58.513479] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:49.064 Running I/O for 90 seconds... 00:22:49.064 7828.00 IOPS, 30.58 MiB/s [2024-11-19T16:17:55.779Z] 8755.50 IOPS, 34.20 MiB/s [2024-11-19T16:17:55.779Z] 9293.00 IOPS, 36.30 MiB/s [2024-11-19T16:17:55.779Z] 9545.50 IOPS, 37.29 MiB/s [2024-11-19T16:17:55.779Z] 9700.60 IOPS, 37.89 MiB/s [2024-11-19T16:17:55.779Z] 9805.17 IOPS, 38.30 MiB/s [2024-11-19T16:17:55.779Z] 9867.29 IOPS, 38.54 MiB/s [2024-11-19T16:17:55.779Z] 9900.88 IOPS, 38.68 MiB/s [2024-11-19T16:17:55.779Z] [2024-11-19 16:17:07.777519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:128568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.064 [2024-11-19 16:17:07.777572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:49.064 [2024-11-19 16:17:07.777647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:128592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:128600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:128608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:128616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:128624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.777889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:128056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.777923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:128064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.777978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.777999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:128072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:128088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:128104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:128112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:128128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:128136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:128144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:128152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:128168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:128176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:128184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:128192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:128208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:128216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:128232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-11-19 16:17:07.778777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.778926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.778950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:128640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.778965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.779011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:128648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.779046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:128656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 
16:17:07.779094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:128664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.779127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:128672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.779160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:128680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.065 [2024-11-19 16:17:07.779192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:128688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.065 [2024-11-19 16:17:07.779206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:128696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:128712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:128728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 
cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:128744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:128752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:128264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:128280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:128288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:128296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:128304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.779782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:128760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:128768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:128776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:128784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:128792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.779976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:128800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.779990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:128808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:128816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:128320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:49.066 [2024-11-19 16:17:07.780136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:128328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:128336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:128344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:128352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:128368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.066 [2024-11-19 16:17:07.780386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:128832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:128840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:24 nsid:1 lba:128848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:49.066 [2024-11-19 16:17:07.780632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:128872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.066 [2024-11-19 16:17:07.780646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:128880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:128888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:128896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:128904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:128912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:128920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:128928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:128936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:128944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.780963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.780982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:128376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.780997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:128384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:128392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:128408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:128416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:128424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 
sqhd:004e p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:128440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:128448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:128456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:128464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781563] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.781583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.781597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.782988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:128528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:128536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:128544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:128560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.067 [2024-11-19 16:17:07.783224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:128952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.783258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:128960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.783311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128968 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:49.067 [2024-11-19 16:17:07.783345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.783379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:128984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.067 [2024-11-19 16:17:07.783413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:49.067 [2024-11-19 16:17:07.783432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:128992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:129000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:129016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:129024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:129040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:58 nsid:1 lba:129048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:07.783737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:07.783751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:49.068 9903.11 IOPS, 38.68 MiB/s [2024-11-19T16:17:55.783Z] 9952.40 IOPS, 38.88 MiB/s [2024-11-19T16:17:55.783Z] 9987.64 IOPS, 39.01 MiB/s [2024-11-19T16:17:55.783Z] 10021.67 IOPS, 39.15 MiB/s [2024-11-19T16:17:55.783Z] 10053.54 IOPS, 39.27 MiB/s [2024-11-19T16:17:55.783Z] 10084.57 IOPS, 39.39 MiB/s [2024-11-19T16:17:55.783Z] [2024-11-19 16:17:14.348651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.348972] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.348985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.068 [2024-11-19 16:17:14.349290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 
16:17:14.349320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.068 [2024-11-19 16:17:14.349553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.068 [2024-11-19 16:17:14.349568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 
sqhd:0026 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.349827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.349861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.349895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.349929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.349962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.349981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.349996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350355] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.069 [2024-11-19 16:17:14.350662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:49.069 [2024-11-19 16:17:14.350725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:49.069 [2024-11-19 16:17:14.350882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.069 [2024-11-19 16:17:14.350896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.350916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.350931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.350951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.350965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.350985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.350999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 
nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:49.070 [2024-11-19 16:17:14.351778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.070 [2024-11-19 16:17:14.351825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.351967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.351987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.070 [2024-11-19 16:17:14.352234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:49.070 [2024-11-19 16:17:14.352266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.071 [2024-11-19 16:17:14.352281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.352300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.071 [2024-11-19 16:17:14.352314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.352333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.071 [2024-11-19 16:17:14.352354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.071 [2024-11-19 16:17:14.353507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353584] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.353968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.353983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.354624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.354638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.355027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.355067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.355090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.071 [2024-11-19 16:17:14.355106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.071 [2024-11-19 16:17:14.355125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.071 [2024-11-19 16:17:14.355139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001c p:0 m:0 
dnr:0 00:22:49.072 [2024-11-19 16:17:14.355426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.072 [2024-11-19 16:17:14.355935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.355969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.355989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.356005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.356025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.356039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.356058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.356072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.072 [2024-11-19 16:17:14.356099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.072 [2024-11-19 16:17:14.356114] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:22:49.072–00:22:49.078 [2024-11-19 16:17:14.356133 – 16:17:14.380782] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated per-command records — READ sqid:1 nsid:1 lba:15616–16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 and WRITE sqid:1 nsid:1 lba:16064–16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 — each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:22:49.078 [2024-11-19 16:17:14.380810] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.380831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.380869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.380891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.380919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.380939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.380967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.380988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:22:49.078 [2024-11-19 16:17:14.381339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.078 [2024-11-19 16:17:14.381867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.078 [2024-11-19 16:17:14.381895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.381916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.381944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.381964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.381993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.382334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:49.079 [2024-11-19 16:17:14.382857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.382954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.382982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.383003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.383060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.383108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.079 [2024-11-19 16:17:14.383414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.383447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.383467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.385358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.385387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:49.079 [2024-11-19 16:17:14.385424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.079 [2024-11-19 16:17:14.385444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.385862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:22:49.080 [2024-11-19 16:17:14.385882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.385896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.385930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.385964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.385983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.385997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:15840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386568] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:15920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.080 [2024-11-19 16:17:14.386730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.080 [2024-11-19 16:17:14.386766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:49.080 [2024-11-19 16:17:14.386789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.386825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.386859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.386894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.386928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.386970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.386986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.081 [2024-11-19 16:17:14.387593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387649] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.387977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.387997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.388012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.388031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.388045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.388064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.388079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.388098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.081 [2024-11-19 16:17:14.388112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.081 [2024-11-19 16:17:14.388131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:16568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.388753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:15624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.388976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.388995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.082 [2024-11-19 16:17:14.389010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:49.082 [2024-11-19 16:17:14.389045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.082 [2024-11-19 16:17:14.389346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.082 [2024-11-19 16:17:14.389360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:14.389875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.389895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.389909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:14.390270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:14.390295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:49.083 9960.53 IOPS, 38.91 MiB/s [2024-11-19T16:17:55.798Z] 9436.44 IOPS, 36.86 MiB/s [2024-11-19T16:17:55.798Z] 9474.82 IOPS, 37.01 MiB/s [2024-11-19T16:17:55.798Z] 9513.33 IOPS, 37.16 MiB/s [2024-11-19T16:17:55.798Z] 9548.63 IOPS, 37.30 MiB/s [2024-11-19T16:17:55.798Z] 9583.20 IOPS, 37.43 MiB/s [2024-11-19T16:17:55.798Z] 9611.81 IOPS, 37.55 MiB/s [2024-11-19T16:17:55.798Z] [2024-11-19 16:17:21.395460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:108736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:108744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:108752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 
p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:108760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:108768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:108776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:108784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:108792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:108800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:108808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:108816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.395975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.395993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:108832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.396006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:108840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.396051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:108848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.396084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:108856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.083 [2024-11-19 16:17:21.396115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:21.396148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:21.396181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:21.396214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.083 [2024-11-19 16:17:21.396233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.083 [2024-11-19 16:17:21.396274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 
16:17:21.396375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:108864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:108872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:108896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:108904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:108920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.396709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108352 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.396978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.396996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.084 [2024-11-19 16:17:21.397261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:108928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:108936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:49.084 [2024-11-19 16:17:21.397432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:109000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.084 [2024-11-19 16:17:21.397631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:49.084 [2024-11-19 16:17:21.397682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:109008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:109016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:109024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.397961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.397991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:109080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:109088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:109096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398159] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:108488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:108496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108552 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:108576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:108592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.085 [2024-11-19 16:17:21.398769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:109120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398899] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:109144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.398969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:109160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.398984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.399033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.399053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.399082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:49.085 [2024-11-19 16:17:21.399101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:109184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.085 [2024-11-19 16:17:21.399115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:109192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.399148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.399190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.399225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:109216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.399258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 
16:17:21.399278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:109224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.399292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:108648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:108672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:108680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.399768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.399782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.400490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.086 [2024-11-19 16:17:21.400537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:109248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:21.400960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:21.400975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:49.086 9552.00 IOPS, 37.31 MiB/s [2024-11-19T16:17:55.801Z] 9136.70 IOPS, 35.69 MiB/s [2024-11-19T16:17:55.801Z] 8756.00 IOPS, 34.20 MiB/s [2024-11-19T16:17:55.801Z] 8405.76 IOPS, 32.84 MiB/s [2024-11-19T16:17:55.801Z] 8082.46 IOPS, 31.57 MiB/s [2024-11-19T16:17:55.801Z] 7783.11 IOPS, 30.40 MiB/s [2024-11-19T16:17:55.801Z] 7505.14 IOPS, 29.32 MiB/s [2024-11-19T16:17:55.801Z] 7294.59 IOPS, 28.49 MiB/s [2024-11-19T16:17:55.801Z] 7378.90 IOPS, 28.82 MiB/s [2024-11-19T16:17:55.801Z] 7459.84 IOPS, 29.14 MiB/s [2024-11-19T16:17:55.801Z] 7539.22 IOPS, 29.45 MiB/s [2024-11-19T16:17:55.801Z] 7616.45 IOPS, 29.75 MiB/s [2024-11-19T16:17:55.801Z] 7685.38 IOPS, 30.02 MiB/s [2024-11-19T16:17:55.801Z] 7746.71 IOPS, 30.26 MiB/s [2024-11-19T16:17:55.801Z] [2024-11-19 16:17:34.725337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86648 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:34.725387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:34.725453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:86656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:34.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:34.725495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:86664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:34.725510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:34.725529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:86672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:34.725543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:49.086 [2024-11-19 16:17:34.725561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.086 [2024-11-19 16:17:34.725575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.725626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.725662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.725694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:106 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.725984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.725998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:86144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:86152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726164] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:86176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:86184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:86200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:86208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:86224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:86232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:86240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 
sqhd:0069 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.087 [2024-11-19 16:17:34.726784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:86712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.726853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.726886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.726916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:86736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.726947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.726962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:86744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.727007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.727022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.727035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.727050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.087 [2024-11-19 16:17:34.727064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.087 [2024-11-19 16:17:34.727078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:86768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:49.088 [2024-11-19 16:17:34.727136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:86792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:86832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:86272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727446] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:86280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:86288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:86304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:86312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:86320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:86336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:86352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:86368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:86376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:86384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.727816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:86840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:86848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:86864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:86872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:86880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.727983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.727997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:125 nsid:1 lba:86888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.728010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:86896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.088 [2024-11-19 16:17:34.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:86392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:86400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:86408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:86416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:86424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:86432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:86440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.088 [2024-11-19 16:17:34.728227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.088 [2024-11-19 16:17:34.728252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:86448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:86936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:86952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:86968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:86976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 
16:17:34.728571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:86992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:87008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:87016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:49.089 [2024-11-19 16:17:34.728709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:86456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:86464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:86488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:86504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:86512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:86520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:86528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.728985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.728999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:86536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:86544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:86552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:86560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:86568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:86592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:86600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:86608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:86616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.089 [2024-11-19 16:17:34.729319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.089 [2024-11-19 16:17:34.729333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:86632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.090 [2024-11-19 16:17:34.729352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7d290 is same with the state(6) to be set 00:22:49.090 [2024-11-19 16:17:34.729382] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86640 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729412] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87032 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729476] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87040 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87048 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87056 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87064 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87072 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87080 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.729727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:49.090 [2024-11-19 16:17:34.729736] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:49.090 [2024-11-19 16:17:34.729745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:87088 len:8 PRP1 0x0 PRP2 0x0 00:22:49.090 [2024-11-19 16:17:34.729757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.730821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:49.090 [2024-11-19 16:17:34.730898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.090 [2024-11-19 16:17:34.730920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:49.090 [2024-11-19 16:17:34.730949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61510 (9): Bad file descriptor 00:22:49.090 [2024-11-19 16:17:34.731349] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:49.090 [2024-11-19 16:17:34.731382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c61510 with addr=10.0.0.3, port=4421 00:22:49.090 [2024-11-19 16:17:34.731397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c61510 is same with the state(6) to be set 00:22:49.090 [2024-11-19 16:17:34.731431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c61510 (9): Bad file descriptor 00:22:49.090 [2024-11-19 16:17:34.731460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:22:49.090 [2024-11-19 16:17:34.731473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:22:49.090 [2024-11-19 16:17:34.731485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:22:49.090 [2024-11-19 16:17:34.731497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
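The burst of READ/WRITE prints above is the driver aborting every queued command when the active path drops: the "(03/02)" completions carry the path-related status ASYMMETRIC ACCESS INACCESSIBLE (the path's ANA state no longer allows I/O), and the "(00/08)" completions carry the generic ABORTED - SQ DELETION status reported while the submission queue is torn down for the reset. The connect() that follows fails with errno 111 (ECONNREFUSED on Linux) against 10.0.0.3 port 4421, so this reset attempt is marked failed and retried; the retry logged at 16:17:44 below succeeds. A small illustrative helper for decoding the two "(SCT/SC)" pairs that appear in this log -- hypothetical, not part of the SPDK test scripts:

#!/usr/bin/env bash
# Illustrative only (hypothetical helper, not in the SPDK repo): decode the
# "(SCT/SC)" status pairs printed by spdk_nvme_print_completion above.
decode_nvme_status() {
    local sct=$1 sc=$2
    case "$sct/$sc" in
        00/08) echo "generic status 0x08: ABORTED - SQ DELETION (queue torn down during reset)" ;;
        03/02) echo "path-related status 0x02: ASYMMETRIC ACCESS INACCESSIBLE (ANA state blocks I/O)" ;;
        *)     echo "SCT=0x$sct SC=0x$sc: see the NVMe base specification status code tables" ;;
    esac
}
decode_nvme_status 03 02
decode_nvme_status 00 08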
00:22:49.090 [2024-11-19 16:17:34.731510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:22:49.090 7806.89 IOPS, 30.50 MiB/s [2024-11-19T16:17:55.805Z] 7855.35 IOPS, 30.68 MiB/s [2024-11-19T16:17:55.805Z] 7912.00 IOPS, 30.91 MiB/s [2024-11-19T16:17:55.805Z] 7968.82 IOPS, 31.13 MiB/s [2024-11-19T16:17:55.805Z] 8020.60 IOPS, 31.33 MiB/s [2024-11-19T16:17:55.805Z] 8068.49 IOPS, 31.52 MiB/s [2024-11-19T16:17:55.805Z] 8114.67 IOPS, 31.70 MiB/s [2024-11-19T16:17:55.805Z] 8149.95 IOPS, 31.84 MiB/s [2024-11-19T16:17:55.805Z] 8191.45 IOPS, 32.00 MiB/s [2024-11-19T16:17:55.805Z] 8234.67 IOPS, 32.17 MiB/s [2024-11-19T16:17:55.805Z] [2024-11-19 16:17:44.797199] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:49.090 8273.54 IOPS, 32.32 MiB/s [2024-11-19T16:17:55.805Z] 8310.23 IOPS, 32.46 MiB/s [2024-11-19T16:17:55.805Z] 8345.31 IOPS, 32.60 MiB/s [2024-11-19T16:17:55.805Z] 8378.76 IOPS, 32.73 MiB/s [2024-11-19T16:17:55.805Z] 8405.54 IOPS, 32.83 MiB/s [2024-11-19T16:17:55.805Z] 8437.31 IOPS, 32.96 MiB/s [2024-11-19T16:17:55.805Z] 8466.44 IOPS, 33.07 MiB/s [2024-11-19T16:17:55.805Z] 8494.02 IOPS, 33.18 MiB/s [2024-11-19T16:17:55.805Z] 8520.87 IOPS, 33.28 MiB/s [2024-11-19T16:17:55.805Z] 8546.89 IOPS, 33.39 MiB/s [2024-11-19T16:17:55.805Z] Received shutdown signal, test time was about 55.389373 seconds 00:22:49.090 00:22:49.090 Latency(us) 00:22:49.090 [2024-11-19T16:17:55.805Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.090 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:49.090 Verification LBA range: start 0x0 length 0x4000 00:22:49.090 Nvme0n1 : 55.39 8552.51 33.41 0.00 0.00 14937.15 1295.83 7015926.69 00:22:49.090 [2024-11-19T16:17:55.805Z] =================================================================================================================== 00:22:49.090 [2024-11-19T16:17:55.805Z] Total : 8552.51 33.41 0.00 0.00 14937.15 1295.83 7015926.69 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.090 rmmod nvme_tcp 00:22:49.090 rmmod nvme_fabrics 00:22:49.090 rmmod nvme_keyring 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 
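After the run summary, multipath.sh deletes the subsystem over rpc.py and nvmftestfini unloads the kernel initiator modules; the rmmod lines above are the verbose output of that step. A condensed sketch of the unload loop traced above -- the real logic is nvmfcleanup in the nvmf common.sh, and the break-on-success and sleep here are assumptions added for illustration (the trace only shows the {1..20} retry bounds and the two modprobe -r calls):

# Condensed, illustrative sketch of the module unload step traced above.
sync
set +e
for i in {1..20}; do
    # modprobe -r also drops dependent modules (nvme_fabrics, nvme_keyring)
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1   # assumption: back off before retrying while references drain
done
set -e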
00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 96264 ']' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 96264 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 96264 ']' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 96264 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96264 00:22:49.090 killing process with pid 96264 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96264' 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 96264 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 96264 00:22:49.090 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 
-- # ip link delete nvmf_br type bridge 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.091 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:49.351 ************************************ 00:22:49.351 END TEST nvmf_host_multipath 00:22:49.351 ************************************ 00:22:49.351 00:22:49.351 real 1m0.126s 00:22:49.351 user 2m46.511s 00:22:49.351 sys 0m17.829s 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:49.351 ************************************ 00:22:49.351 START TEST nvmf_timeout 00:22:49.351 ************************************ 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:49.351 * Looking for test storage... 
00:22:49.351 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:22:49.351 16:17:55 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:49.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.351 --rc genhtml_branch_coverage=1 00:22:49.351 --rc genhtml_function_coverage=1 00:22:49.351 --rc genhtml_legend=1 00:22:49.351 --rc geninfo_all_blocks=1 00:22:49.351 --rc geninfo_unexecuted_blocks=1 00:22:49.351 00:22:49.351 ' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:49.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.351 --rc genhtml_branch_coverage=1 00:22:49.351 --rc genhtml_function_coverage=1 00:22:49.351 --rc genhtml_legend=1 00:22:49.351 --rc geninfo_all_blocks=1 00:22:49.351 --rc geninfo_unexecuted_blocks=1 00:22:49.351 00:22:49.351 ' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:49.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.351 --rc genhtml_branch_coverage=1 00:22:49.351 --rc genhtml_function_coverage=1 00:22:49.351 --rc genhtml_legend=1 00:22:49.351 --rc geninfo_all_blocks=1 00:22:49.351 --rc geninfo_unexecuted_blocks=1 00:22:49.351 00:22:49.351 ' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:49.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.351 --rc genhtml_branch_coverage=1 00:22:49.351 --rc genhtml_function_coverage=1 00:22:49.351 --rc genhtml_legend=1 00:22:49.351 --rc geninfo_all_blocks=1 00:22:49.351 --rc geninfo_unexecuted_blocks=1 00:22:49.351 00:22:49.351 ' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.351 
16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.351 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.352 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:49.352 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.611 16:17:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:49.611 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:49.612 Cannot find device "nvmf_init_br" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:49.612 Cannot find device "nvmf_init_br2" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:49.612 Cannot find device "nvmf_tgt_br" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:49.612 Cannot find device "nvmf_tgt_br2" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:49.612 Cannot find device "nvmf_init_br" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:49.612 Cannot find device "nvmf_init_br2" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:49.612 Cannot find device "nvmf_tgt_br" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:49.612 Cannot find device "nvmf_tgt_br2" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:49.612 Cannot find device "nvmf_br" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:49.612 Cannot find device "nvmf_init_if" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:49.612 Cannot find device "nvmf_init_if2" 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:49.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:49.612 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
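nvmf_veth_init builds the test topology from scratch: a dedicated network namespace for the target plus four veth pairs whose bridge-side peers stay in the root namespace. The "Cannot find device" and "Cannot open network namespace" messages above are just the idempotent cleanup probing for leftovers from a previous run. The creation commands traced here amount to the following sketch (interface and namespace names are the ones used by this run):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                # target ends move into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk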
00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:49.612 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
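Addressing and switching then follow the fixed plan from nvmf/common.sh: 10.0.0.1 and 10.0.0.2 on the initiator-side veths in the root namespace, 10.0.0.3 and 10.0.0.4 on the target-side veths inside nvmf_tgt_ns_spdk, all four bridge-side peers enslaved to nvmf_br, and iptables opened for the NVMe/TCP port. Condensed from the trace above (the individual "ip link set ... up" commands are omitted here):

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br          # all four host-side peers join the bridge
    done
    # open port 4420; the real rules also carry an SPDK_NVMF comment so teardown
    # can filter them back out via iptables-save | grep -v SPDK_NVMF
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT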
00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:49.871 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:49.871 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:22:49.871 00:22:49.871 --- 10.0.0.3 ping statistics --- 00:22:49.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.871 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:49.871 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:49.871 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:22:49.871 00:22:49.871 --- 10.0.0.4 ping statistics --- 00:22:49.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.871 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:49.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:49.871 00:22:49.871 --- 10.0.0.1 ping statistics --- 00:22:49.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.871 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:49.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:49.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:22:49.871 00:22:49.871 --- 10.0.0.2 ping statistics --- 00:22:49.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.871 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.871 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=97467 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 97467 00:22:49.872 16:17:56 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97467 ']' 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.872 16:17:56 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:49.872 [2024-11-19 16:17:56.537548] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:22:49.872 [2024-11-19 16:17:56.537670] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.134 [2024-11-19 16:17:56.686441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:50.134 [2024-11-19 16:17:56.704980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.134 [2024-11-19 16:17:56.705045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.134 [2024-11-19 16:17:56.705055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.134 [2024-11-19 16:17:56.705062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.134 [2024-11-19 16:17:56.705068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
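nvmfappstart launches the target binary inside the namespace and then waits for its RPC socket, which is what produces the DPDK/EAL and tracepoint notices above. A sketch of the launch as it ran here; the wait loop is a paraphrase, not the literal waitforlisten body:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # poll the default RPC socket until the app is ready to accept commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done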
00:22:50.134 [2024-11-19 16:17:56.705948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.134 [2024-11-19 16:17:56.705959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.134 [2024-11-19 16:17:56.733177] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:51.112 [2024-11-19 16:17:57.716380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.112 16:17:57 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:51.371 Malloc0 00:22:51.371 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:51.630 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:51.889 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:52.148 [2024-11-19 16:17:58.691354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=97516 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 97516 /var/tmp/bdevperf.sock 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97516 ']' 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
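With the target listening on its RPC socket, timeout.sh provisions the NVMe-oF subsystem: a TCP transport (flags taken verbatim from the trace), a 64 MB malloc bdev with 512-byte blocks, a subsystem that exports it as a namespace, and a listener on the namespaced address 10.0.0.3:4420. The sequence, exactly as traced above:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420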
00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.148 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:52.148 [2024-11-19 16:17:58.751296] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:22:52.148 [2024-11-19 16:17:58.751394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97516 ] 00:22:52.407 [2024-11-19 16:17:58.902160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.407 [2024-11-19 16:17:58.926812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.407 [2024-11-19 16:17:58.959702] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:52.407 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.407 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:22:52.407 16:17:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:52.670 16:17:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:52.928 NVMe0n1 00:22:52.928 16:17:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=97532 00:22:52.928 16:17:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:52.928 16:17:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:52.928 Running I/O for 10 seconds... 
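The initiator side runs bdevperf on its own core with a separate RPC socket, applies the bdev_nvme options used by this test (-r -1, verbatim from the trace), and attaches to the listener with a 5-second controller-loss timeout and a 2-second reconnect delay, which are the knobs the timeout test exercises. Shortly after the verify workload starts, the script removes the 10.0.0.3:4420 listener (the first command on the next trace line), and the long run of "ABORTED - SQ DELETION" completions that follows is the queued I/O being failed back as the connection drops. Condensed from the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests &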
00:22:53.863 16:18:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:54.124 7701.00 IOPS, 30.08 MiB/s [2024-11-19T16:18:00.839Z] [2024-11-19 16:18:00.808882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:73496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.808944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.808981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:73504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.808992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:73520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:73528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:73536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:73544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:73552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73568 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:73584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:73592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:73600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:73616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.809850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.809979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:73640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:73648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:54.124 [2024-11-19 16:18:00.810224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:73656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:73664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:73672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:73680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:73688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:73704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:73712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:73720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.124 [2024-11-19 16:18:00.810840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:73728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.124 [2024-11-19 16:18:00.810850] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:73736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:73760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:73768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:73808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811873] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:73816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:73824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:73832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.811944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.811953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.812214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.812324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.812350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:72872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:72912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.812963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.812977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:72928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:72952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:72968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:54.125 [2024-11-19 16:18:00.813735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.813816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.125 [2024-11-19 16:18:00.813836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:72992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.813866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:73000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.813986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.814112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:73008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.814379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.814405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.814418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:73024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.125 [2024-11-19 16:18:00.814429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.125 [2024-11-19 16:18:00.814441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814463] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:73040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.126 [2024-11-19 16:18:00.814494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:73048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:73056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:73072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:73080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.814941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:73088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.814950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.815206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:73096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.815223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.815368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:73104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.815511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.815648] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:73112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.815774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.815899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:73120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.815920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:73136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:73144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:73152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:73168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:73176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:73184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.816877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:73200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.816886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:73216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:73232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:73256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:73264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:73272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:73280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.817964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:73304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.817973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.818097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.818221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.818245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:73320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.818469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.818485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:73328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.818495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.818506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:73336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.126 [2024-11-19 16:18:00.818515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.126 [2024-11-19 16:18:00.818527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:73344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.818824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.818847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:73352 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.818956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.818973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:73360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.818983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:73376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:73384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:73392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:73408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:73416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:73424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:73432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 
16:18:00.819752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:73448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.819920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.819937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:73456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.820183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.820203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:73464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.820212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.820223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:73472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.820442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.820457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.127 [2024-11-19 16:18:00.820467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.820478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae2d50 is same with the state(6) to be set 00:22:54.127 [2024-11-19 16:18:00.820491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.127 [2024-11-19 16:18:00.820499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.127 [2024-11-19 16:18:00.820746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73488 len:8 PRP1 0x0 PRP2 0x0 00:22:54.127 [2024-11-19 16:18:00.820766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.821106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.127 [2024-11-19 16:18:00.821135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.821148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.127 [2024-11-19 
16:18:00.821169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.821179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.127 [2024-11-19 16:18:00.821188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.821197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.127 [2024-11-19 16:18:00.821206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.127 [2024-11-19 16:18:00.821215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4040 is same with the state(6) to be set 00:22:54.127 [2024-11-19 16:18:00.821826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:54.127 [2024-11-19 16:18:00.821860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4040 (9): Bad file descriptor 00:22:54.127 [2024-11-19 16:18:00.822168] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.127 [2024-11-19 16:18:00.822202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac4040 with addr=10.0.0.3, port=4420 00:22:54.127 [2024-11-19 16:18:00.822215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4040 is same with the state(6) to be set 00:22:54.127 [2024-11-19 16:18:00.822247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4040 (9): Bad file descriptor 00:22:54.127 [2024-11-19 16:18:00.822517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:54.127 [2024-11-19 16:18:00.822543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:54.127 [2024-11-19 16:18:00.822556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:54.127 [2024-11-19 16:18:00.822567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
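Note on the burst above: once the target listener disappears, bdev_nvme disconnects the controller, so every queued READ/WRITE is completed with the generic "ABORTED - SQ DELETION (00/08)" status, and each reconnect to 10.0.0.3:4420 fails with errno = 111 (ECONNREFUSED); controller reinitialization then fails and the reset is retried on the next reconnect interval, which is exactly the cycle repeated below. A minimal way to watch this from outside the test (a sketch, reusing the same bdevperf RPC socket that appears in this log) is to poll the attached controllers:

    # poll the controllers while the target listener is down; the entry
    # disappears once bdev_nvme finally gives up and deletes the controller
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_get_controllers | jq -r '.[].name'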
00:22:54.127 [2024-11-19 16:18:00.822578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:54.127 16:18:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:56.048 4554.50 IOPS, 17.79 MiB/s [2024-11-19T16:18:03.022Z] 3036.33 IOPS, 11.86 MiB/s [2024-11-19T16:18:03.022Z] [2024-11-19 16:18:02.822715] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.307 [2024-11-19 16:18:02.822773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac4040 with addr=10.0.0.3, port=4420 00:22:56.307 [2024-11-19 16:18:02.822791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4040 is same with the state(6) to be set 00:22:56.307 [2024-11-19 16:18:02.822816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4040 (9): Bad file descriptor 00:22:56.307 [2024-11-19 16:18:02.822836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:56.307 [2024-11-19 16:18:02.822846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:56.307 [2024-11-19 16:18:02.822858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:56.307 [2024-11-19 16:18:02.822869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:56.307 [2024-11-19 16:18:02.822881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:56.307 16:18:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:56.307 16:18:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:56.307 16:18:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:56.566 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:56.566 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:56.566 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:56.566 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:56.824 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:56.824 16:18:03 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:22:58.014 2277.25 IOPS, 8.90 MiB/s [2024-11-19T16:18:04.987Z] 1821.80 IOPS, 7.12 MiB/s [2024-11-19T16:18:04.987Z] [2024-11-19 16:18:04.822991] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.272 [2024-11-19 16:18:04.823072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ac4040 with addr=10.0.0.3, port=4420 00:22:58.272 [2024-11-19 16:18:04.823102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4040 is same with the state(6) to be set 00:22:58.272 [2024-11-19 16:18:04.823125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4040 (9): Bad file descriptor 00:22:58.272 [2024-11-19 16:18:04.823143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:58.272 [2024-11-19 16:18:04.823153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:58.272 [2024-11-19 16:18:04.823163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:58.272 [2024-11-19 16:18:04.823173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:58.272 [2024-11-19 16:18:04.823190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:00.143 1518.17 IOPS, 5.93 MiB/s [2024-11-19T16:18:06.858Z] 1301.29 IOPS, 5.08 MiB/s [2024-11-19T16:18:06.858Z] [2024-11-19 16:18:06.823256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:00.143 [2024-11-19 16:18:06.823548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:00.143 [2024-11-19 16:18:06.823564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:00.143 [2024-11-19 16:18:06.823574] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:23:00.143 [2024-11-19 16:18:06.823587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:01.336 1138.62 IOPS, 4.45 MiB/s 00:23:01.336 Latency(us) 00:23:01.336 [2024-11-19T16:18:08.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.336 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.336 Verification LBA range: start 0x0 length 0x4000 00:23:01.336 NVMe0n1 : 8.20 1111.39 4.34 15.62 0.00 113628.71 3381.06 7046430.72 00:23:01.336 [2024-11-19T16:18:08.051Z] =================================================================================================================== 00:23:01.336 [2024-11-19T16:18:08.051Z] Total : 1111.39 4.34 15.62 0.00 113628.71 3381.06 7046430.72 00:23:01.336 { 00:23:01.336 "results": [ 00:23:01.336 { 00:23:01.336 "job": "NVMe0n1", 00:23:01.336 "core_mask": "0x4", 00:23:01.336 "workload": "verify", 00:23:01.336 "status": "finished", 00:23:01.336 "verify_range": { 00:23:01.336 "start": 0, 00:23:01.336 "length": 16384 00:23:01.336 }, 00:23:01.336 "queue_depth": 128, 00:23:01.336 "io_size": 4096, 00:23:01.336 "runtime": 8.196015, 00:23:01.336 "iops": 1111.3937688010576, 00:23:01.336 "mibps": 4.341381909379131, 00:23:01.336 "io_failed": 128, 00:23:01.336 "io_timeout": 0, 00:23:01.336 "avg_latency_us": 113628.71031169112, 00:23:01.336 "min_latency_us": 3381.061818181818, 00:23:01.336 "max_latency_us": 7046430.72 00:23:01.336 } 00:23:01.336 ], 00:23:01.336 "core_count": 1 00:23:01.336 } 00:23:01.904 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:01.904 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:01.904 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:02.163 16:18:08 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 97532 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 97516 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97516 ']' 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97516 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.163 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97516 00:23:02.422 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:02.422 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:02.422 killing process with pid 97516 00:23:02.422 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97516' 00:23:02.422 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97516 00:23:02.422 16:18:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97516 00:23:02.422 Received shutdown signal, test time was about 9.270024 seconds 00:23:02.422 00:23:02.422 Latency(us) 00:23:02.422 [2024-11-19T16:18:09.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.422 [2024-11-19T16:18:09.137Z] =================================================================================================================== 00:23:02.422 [2024-11-19T16:18:09.137Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.422 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:02.686 [2024-11-19 16:18:09.274460] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=97654 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 97654 /var/tmp/bdevperf.sock 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97654 ']' 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
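For reference, the summary table and the JSON results block above are internally consistent: with 4096-byte I/Os, MiB/s = IOPS x 4096 / 2^20, and the Fail/s column is io_failed divided by the runtime. A quick illustrative check (not part of the test suite) using the numbers from that JSON block:

    # sanity-check the bdevperf summary printed above
    awk 'BEGIN {
        iops = 1111.3937688010576; runtime = 8.196015; io_failed = 128
        printf "%.2f MiB/s, %.2f fails/s\n", iops * 4096 / 1048576, io_failed / runtime
    }'
    # -> 4.34 MiB/s, 15.62 fails/s, matching the NVMe0n1 row in the table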
00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.686 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:02.686 [2024-11-19 16:18:09.343942] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:23:02.686 [2024-11-19 16:18:09.344045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97654 ] 00:23:02.949 [2024-11-19 16:18:09.485072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.949 [2024-11-19 16:18:09.506273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.949 [2024-11-19 16:18:09.536742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:02.949 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.949 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:02.949 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:03.207 16:18:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:03.466 NVMe0n1 00:23:03.466 16:18:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=97666 00:23:03.466 16:18:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:03.466 16:18:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:03.724 Running I/O for 10 seconds... 
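This second phase exercises the controller-loss timeout path: the listener is re-created, a fresh bdevperf (pid 97654) attaches NVMe0 with --ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2 and --reconnect-delay-sec 1, I/O is started via perform_tests, and the listener is then removed again (next line) so that reconnect attempts fail until the loss timeout expires. Condensed from the commands visible in this log (rpc.py and bdevperf.py stand for the full script paths shown above; timeout.sh wraps these in helper functions), the sequence is roughly:

    # re-create the TCP listener, attach with reconnect/timeout limits,
    # start I/O, then remove the listener to force the timeout path
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420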
00:23:04.661 16:18:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:04.921 7445.00 IOPS, 29.08 MiB/s [2024-11-19T16:18:11.636Z] [2024-11-19 16:18:11.401796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:66104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.401983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.401993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66144 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.402671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.402692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:04.921 [2024-11-19 16:18:11.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.403988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.403999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404636] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.404793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.404953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.921 [2024-11-19 16:18:11.405436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.921 [2024-11-19 16:18:11.405446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 
[2024-11-19 16:18:11.405581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.405763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.405772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.406503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:65704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.406524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:103 nsid:1 lba:65712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.406544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:65720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.406580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:65728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.406601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:65736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.406729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.406741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:65768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:65776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:65784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65792 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:65808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:65816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.407510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.407531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:65832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:65848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.407808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.407819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:65856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:04.922 [2024-11-19 16:18:11.407829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:65864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:65872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:04.922 [2024-11-19 16:18:11.408224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:65880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:65888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:65904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:65920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:65928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:65936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:65944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:65952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:65960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:65968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:65976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:65984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:65992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:66000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:66008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408598] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.408783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.408794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.409086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:04.922 [2024-11-19 16:18:11.409210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.409221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x995d50 is same with the state(6) to be set 00:23:04.922 [2024-11-19 16:18:11.409246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:04.922 [2024-11-19 16:18:11.409256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:04.922 [2024-11-19 16:18:11.409265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66064 len:8 PRP1 0x0 PRP2 0x0 00:23:04.922 [2024-11-19 16:18:11.409275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.922 [2024-11-19 16:18:11.409420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.923 [2024-11-19 16:18:11.409438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.923 [2024-11-19 16:18:11.409450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.923 [2024-11-19 16:18:11.409459] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.923 [2024-11-19 16:18:11.409469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.923 [2024-11-19 16:18:11.409478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.923 [2024-11-19 16:18:11.409488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.923 [2024-11-19 16:18:11.409497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.923 [2024-11-19 16:18:11.409507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:04.923 [2024-11-19 16:18:11.409998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:23:04.923 [2024-11-19 16:18:11.410344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:04.923 [2024-11-19 16:18:11.410476] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:04.923 [2024-11-19 16:18:11.410499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420 00:23:04.923 [2024-11-19 16:18:11.410514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:04.923 [2024-11-19 16:18:11.410533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:04.923 [2024-11-19 16:18:11.410549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:23:04.923 [2024-11-19 16:18:11.410559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:23:04.923 [2024-11-19 16:18:11.410584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:23:04.923 [2024-11-19 16:18:11.410593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:23:04.923 [2024-11-19 16:18:11.410603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:05.857 16:18:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:23:05.857 4106.50 IOPS, 16.04 MiB/s [2024-11-19T16:18:12.572Z] [2024-11-19 16:18:12.410745] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:23:05.857 [2024-11-19 16:18:12.411143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420
00:23:05.857 [2024-11-19 16:18:12.411168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set
00:23:05.857 [2024-11-19 16:18:12.411194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor
00:23:05.857 [2024-11-19 16:18:12.411228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:23:05.857 [2024-11-19 16:18:12.411238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:23:05.857 [2024-11-19 16:18:12.411248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:23:05.857 [2024-11-19 16:18:12.411258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:23:05.857 [2024-11-19 16:18:12.411304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:23:05.857 16:18:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:06.116 [2024-11-19 16:18:12.669544] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:06.116 16:18:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 97666
00:23:06.941 2737.67 IOPS, 10.69 MiB/s [2024-11-19T16:18:13.656Z] [2024-11-19 16:18:13.429981] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:23:08.813 2053.25 IOPS, 8.02 MiB/s [2024-11-19T16:18:16.464Z] 3468.00 IOPS, 13.55 MiB/s [2024-11-19T16:18:17.403Z] 4619.33 IOPS, 18.04 MiB/s [2024-11-19T16:18:18.341Z] 5441.71 IOPS, 21.26 MiB/s [2024-11-19T16:18:19.719Z] 6049.50 IOPS, 23.63 MiB/s [2024-11-19T16:18:20.659Z] 6518.67 IOPS, 25.46 MiB/s [2024-11-19T16:18:20.659Z] 6903.60 IOPS, 26.97 MiB/s
00:23:13.944 Latency(us)
00:23:13.944 [2024-11-19T16:18:20.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:13.944 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:13.944 Verification LBA range: start 0x0 length 0x4000
00:23:13.944 NVMe0n1 : 10.01 6907.57 26.98 0.00 0.00 18492.20 1355.40 3035150.89
00:23:13.944 [2024-11-19T16:18:20.659Z] ===================================================================================================================
00:23:13.944 [2024-11-19T16:18:20.659Z] Total : 6907.57 26.98 0.00 0.00 18492.20 1355.40 3035150.89
00:23:13.944 {
00:23:13.944 "results": [
00:23:13.944 {
00:23:13.944 "job": "NVMe0n1",
00:23:13.944 "core_mask": "0x4",
00:23:13.944 "workload": "verify",
00:23:13.944 "status": "finished",
00:23:13.944 "verify_range": {
00:23:13.944 "start": 0,
00:23:13.944 "length": 16384
00:23:13.944 },
00:23:13.944 "queue_depth": 128,
00:23:13.944 "io_size": 4096,
00:23:13.944 "runtime": 10.008154,
00:23:13.944 "iops": 6907.567569403908,
00:23:13.944 "mibps": 26.982685817984017,
00:23:13.944 "io_failed": 0,
00:23:13.944 "io_timeout": 0,
00:23:13.944 "avg_latency_us": 18492.200942386895,
00:23:13.944 "min_latency_us": 1355.4036363636365,
00:23:13.944 "max_latency_us": 3035150.8945454545
00:23:13.944 }
00:23:13.944 ],
00:23:13.944 "core_count": 1
00:23:13.944 }
00:23:13.944 16:18:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=97772
00:23:13.944 16:18:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:13.944 16:18:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:13.944 Running I/O for 10 seconds...
00:23:14.917 16:18:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:14.917 7588.00 IOPS, 29.64 MiB/s [2024-11-19T16:18:21.632Z] [2024-11-19 16:18:21.602425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.917 [2024-11-19 16:18:21.602660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.917 [2024-11-19 16:18:21.602884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.917 [2024-11-19 16:18:21.603074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.603228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.603426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.603572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.603729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.603866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.604910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 
00:23:14.918 [2024-11-19 16:18:21.605080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605202] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605388] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605471] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605512] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.918 [2024-11-19 16:18:21.605614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the 
state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605686] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605848] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21274a0 is same with the state(6) to be set 00:23:14.919 [2024-11-19 16:18:21.605984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 
16:18:21.606449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.919 [2024-11-19 16:18:21.606481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.919 [2024-11-19 16:18:21.606492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:117 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.606965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.606976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:69264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:69288 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:69312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.920 [2024-11-19 16:18:21.607220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:69320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.920 [2024-11-19 16:18:21.607229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:69336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:69352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:69360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:69368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:14.921 [2024-11-19 16:18:21.607359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:69384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:69400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:69408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:69424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:69440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:69448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607563] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:69464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:69480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:69488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:69504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:69512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:69520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:69536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:69560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:69568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:69576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.921 [2024-11-19 16:18:21.607880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.921 [2024-11-19 16:18:21.607890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.607899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.607910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.607919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.607944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:69600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.607953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.607963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:69608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.607972] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.607982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:69616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.607990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:69624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:69632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:69640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:69664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:69680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:69688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:69696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:69712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:69736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:69760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:69768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:14.922 [2024-11-19 16:18:21.608380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:69776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.922 [2024-11-19 16:18:21.608388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.608415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.609378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.609906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.610271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.610582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.611480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.611893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.612355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.612788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.613253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.613707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.614104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.614514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.614931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.615319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.615781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:69864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.615803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.922 [2024-11-19 16:18:21.615816] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.922 [2024-11-19 16:18:21.615825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.923 [2024-11-19 16:18:21.615846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.923 [2024-11-19 16:18:21.615866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:69896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.923 [2024-11-19 16:18:21.615901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.923 [2024-11-19 16:18:21.615921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:14.923 [2024-11-19 16:18:21.615940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:14.923 [2024-11-19 16:18:21.615960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.615971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a9410 is same with the state(6) to be set 00:23:14.923 [2024-11-19 16:18:21.615983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:14.923 [2024-11-19 16:18:21.615991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:14.923 [2024-11-19 16:18:21.615999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69792 len:8 PRP1 0x0 PRP2 0x0 00:23:14.923 [2024-11-19 16:18:21.616008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.616132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.923 [2024-11-19 16:18:21.616150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.616160] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.923 [2024-11-19 16:18:21.616168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.616178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.923 [2024-11-19 16:18:21.616187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.616198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.923 [2024-11-19 16:18:21.616207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.923 [2024-11-19 16:18:21.616215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:14.923 [2024-11-19 16:18:21.616435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:14.923 [2024-11-19 16:18:21.616458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:14.923 [2024-11-19 16:18:21.616552] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:14.923 [2024-11-19 16:18:21.616573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420 00:23:14.923 [2024-11-19 16:18:21.616584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:14.923 [2024-11-19 16:18:21.616601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:14.923 [2024-11-19 16:18:21.616617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:14.923 [2024-11-19 16:18:21.616625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:14.923 [2024-11-19 16:18:21.616635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:14.923 [2024-11-19 16:18:21.616646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:14.923 [2024-11-19 16:18:21.616656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:15.182 16:18:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:16.009 4306.00 IOPS, 16.82 MiB/s [2024-11-19T16:18:22.724Z] [2024-11-19 16:18:22.616757] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.009 [2024-11-19 16:18:22.617121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420 00:23:16.009 [2024-11-19 16:18:22.617562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:16.009 [2024-11-19 16:18:22.617970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:16.009 [2024-11-19 16:18:22.618410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:16.009 [2024-11-19 16:18:22.618805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:16.009 [2024-11-19 16:18:22.619190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:16.009 [2024-11-19 16:18:22.619437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:16.009 [2024-11-19 16:18:22.619912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:16.945 2870.67 IOPS, 11.21 MiB/s [2024-11-19T16:18:23.660Z] [2024-11-19 16:18:23.620363] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:16.945 [2024-11-19 16:18:23.620733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420 00:23:16.945 [2024-11-19 16:18:23.621112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:16.945 [2024-11-19 16:18:23.621146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:16.945 [2024-11-19 16:18:23.621164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:16.945 [2024-11-19 16:18:23.621173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:16.945 [2024-11-19 16:18:23.621183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:16.945 [2024-11-19 16:18:23.621192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:23:16.945 [2024-11-19 16:18:23.621203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:18.140 2153.00 IOPS, 8.41 MiB/s [2024-11-19T16:18:24.855Z] [2024-11-19 16:18:24.621534] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.140 [2024-11-19 16:18:24.621596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x977040 with addr=10.0.0.3, port=4420 00:23:18.140 [2024-11-19 16:18:24.621611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977040 is same with the state(6) to be set 00:23:18.140 [2024-11-19 16:18:24.621835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x977040 (9): Bad file descriptor 00:23:18.140 [2024-11-19 16:18:24.622055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:23:18.140 [2024-11-19 16:18:24.622066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:23:18.140 [2024-11-19 16:18:24.622076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:23:18.140 [2024-11-19 16:18:24.622085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:23:18.140 [2024-11-19 16:18:24.622095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:23:18.140 16:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:18.397 [2024-11-19 16:18:24.891986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:18.397 16:18:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 97772 00:23:18.964 1722.40 IOPS, 6.73 MiB/s [2024-11-19T16:18:25.679Z] [2024-11-19 16:18:25.653606] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 
00:23:20.836 2861.67 IOPS, 11.18 MiB/s [2024-11-19T16:18:28.487Z] 3946.57 IOPS, 15.42 MiB/s [2024-11-19T16:18:29.862Z] 4763.88 IOPS, 18.61 MiB/s [2024-11-19T16:18:30.798Z] 5402.00 IOPS, 21.10 MiB/s [2024-11-19T16:18:30.798Z] 5910.30 IOPS, 23.09 MiB/s
00:23:24.083 Latency(us)
00:23:24.083 [2024-11-19T16:18:30.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.083 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:24.083 Verification LBA range: start 0x0 length 0x4000
00:23:24.083 NVMe0n1 : 10.01 5915.60 23.11 4005.89 0.00 12874.71 647.91 3035150.89
00:23:24.083 [2024-11-19T16:18:30.798Z] ===================================================================================================================
00:23:24.083 [2024-11-19T16:18:30.798Z] Total : 5915.60 23.11 4005.89 0.00 12874.71 0.00 3035150.89
00:23:24.083 {
00:23:24.083 "results": [
00:23:24.083 {
00:23:24.083 "job": "NVMe0n1",
00:23:24.083 "core_mask": "0x4",
00:23:24.083 "workload": "verify",
00:23:24.083 "status": "finished",
00:23:24.083 "verify_range": {
00:23:24.083 "start": 0,
00:23:24.083 "length": 16384
00:23:24.083 },
00:23:24.083 "queue_depth": 128,
00:23:24.083 "io_size": 4096,
00:23:24.083 "runtime": 10.006759,
00:23:24.083 "iops": 5915.601644848247,
00:23:24.083 "mibps": 23.107818925188464,
00:23:24.083 "io_failed": 40086,
00:23:24.083 "io_timeout": 0,
00:23:24.083 "avg_latency_us": 12874.708206522833,
00:23:24.083 "min_latency_us": 647.9127272727272,
00:23:24.083 "max_latency_us": 3035150.8945454545
00:23:24.084 }
00:23:24.084 ],
00:23:24.084 "core_count": 1
00:23:24.084 }
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 97654
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97654 ']'
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97654
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97654
00:23:24.084 killing process with pid 97654 Received shutdown signal, test time was about 10.000000 seconds
00:23:24.084
00:23:24.084 Latency(us)
00:23:24.084 [2024-11-19T16:18:30.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.084 [2024-11-19T16:18:30.799Z] ===================================================================================================================
00:23:24.084 [2024-11-19T16:18:30.799Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97654'
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97654
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97654
00:23:24.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=97888 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 97888 /var/tmp/bdevperf.sock 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 97888 ']' 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:24.084 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:24.084 [2024-11-19 16:18:30.705260] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:23:24.084 [2024-11-19 16:18:30.705354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97888 ] 00:23:24.342 [2024-11-19 16:18:30.851780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.342 [2024-11-19 16:18:30.872096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.342 [2024-11-19 16:18:30.901032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:24.342 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:24.342 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:23:24.342 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=97891 00:23:24.342 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:24.342 16:18:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 97888 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:24.601 16:18:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:24.859 NVMe0n1 00:23:25.118 16:18:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:25.118 16:18:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=97931 00:23:25.118 16:18:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:25.118 Running I/O for 10 seconds... 
00:23:26.052 16:18:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:26.314 16637.00 IOPS, 64.99 MiB/s [2024-11-19T16:18:33.029Z] [2024-11-19 16:18:32.843370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843479] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 00:23:26.314 [2024-11-19 16:18:32.843574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set 
00:23:26.314 [2024-11-19 16:18:32.843581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21295c0 is same with the state(6) to be set
[the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x21295c0 recurs for every log entry from 2024-11-19 16:18:32.843589 through 16:18:32.845512, differing only in timestamp]
00:23:26.315 [2024-11-19 16:18:32.845599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:114360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:88184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 
16:18:32.845877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:87024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:67912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:108208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.315 [2024-11-19 16:18:32.845968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.315 [2024-11-19 16:18:32.845979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.845987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.845999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:111920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:24968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:34 nsid:1 lba:92640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:125056 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:26.316 [2024-11-19 16:18:32.846738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.316 [2024-11-19 16:18:32.846828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:119864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.316 [2024-11-19 16:18:32.846838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:127816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 
16:18:32.846977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.846989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.846998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:34792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:98440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:31488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:86168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847201] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:35096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:99640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:26176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:112288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.317 [2024-11-19 16:18:32.847704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.317 [2024-11-19 16:18:32.847714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:26.318 [2024-11-19 16:18:32.847848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.847982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.847993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:56320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 
16:18:32.848056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:26.318 [2024-11-19 16:18:32.848374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c6bca0 is same with the state(6) to be set 00:23:26.318 [2024-11-19 16:18:32.848397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:26.318 [2024-11-19 16:18:32.848408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:26.318 [2024-11-19 16:18:32.848417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18824 len:8 PRP1 0x0 PRP2 0x0 00:23:26.318 [2024-11-19 16:18:32.848426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.318 [2024-11-19 16:18:32.848603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.318 [2024-11-19 16:18:32.848614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.319 [2024-11-19 16:18:32.848623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.319 [2024-11-19 16:18:32.848634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.319 [2024-11-19 16:18:32.848643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.319 [2024-11-19 16:18:32.848653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:26.319 [2024-11-19 16:18:32.848662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:26.319 [2024-11-19 16:18:32.848671] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d040 is same with the state(6) to be set 00:23:26.319 [2024-11-19 16:18:32.849295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:26.319 [2024-11-19 16:18:32.849326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4d040 (9): Bad file descriptor 00:23:26.319 [2024-11-19 16:18:32.849703] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:26.319 [2024-11-19 16:18:32.849739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4d040 with addr=10.0.0.3, port=4420 00:23:26.319 [2024-11-19 16:18:32.849753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d040 is same with the state(6) to be set 00:23:26.319 [2024-11-19 16:18:32.849775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4d040 (9): Bad file descriptor 00:23:26.319 [2024-11-19 16:18:32.849808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:26.319 [2024-11-19 16:18:32.849820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:26.319 [2024-11-19 16:18:32.849831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:26.319 [2024-11-19 16:18:32.849842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:26.319 [2024-11-19 16:18:32.849853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:26.319 16:18:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 97931 00:23:28.190 9526.00 IOPS, 37.21 MiB/s [2024-11-19T16:18:34.905Z] 6350.67 IOPS, 24.81 MiB/s [2024-11-19T16:18:34.905Z] [2024-11-19 16:18:34.849999] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.190 [2024-11-19 16:18:34.850401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4d040 with addr=10.0.0.3, port=4420 00:23:28.190 [2024-11-19 16:18:34.850860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d040 is same with the state(6) to be set 00:23:28.190 [2024-11-19 16:18:34.851269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4d040 (9): Bad file descriptor 00:23:28.190 [2024-11-19 16:18:34.851656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:28.190 [2024-11-19 16:18:34.852036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:28.190 [2024-11-19 16:18:34.852457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:28.190 [2024-11-19 16:18:34.852704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:28.190 [2024-11-19 16:18:34.853101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:30.062 4763.00 IOPS, 18.61 MiB/s [2024-11-19T16:18:37.036Z] 3810.40 IOPS, 14.88 MiB/s [2024-11-19T16:18:37.036Z] [2024-11-19 16:18:36.853468] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.321 [2024-11-19 16:18:36.853530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c4d040 with addr=10.0.0.3, port=4420 00:23:30.321 [2024-11-19 16:18:36.853547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c4d040 is same with the state(6) to be set 00:23:30.321 [2024-11-19 16:18:36.853568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4d040 (9): Bad file descriptor 00:23:30.321 [2024-11-19 16:18:36.853585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:30.321 [2024-11-19 16:18:36.853596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:30.321 [2024-11-19 16:18:36.853606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:30.321 [2024-11-19 16:18:36.853616] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:23:30.321 [2024-11-19 16:18:36.853626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:23:32.193 3175.33 IOPS, 12.40 MiB/s [2024-11-19T16:18:38.908Z] 2721.71 IOPS, 10.63 MiB/s [2024-11-19T16:18:38.908Z] [2024-11-19 16:18:38.853689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:23:32.193 [2024-11-19 16:18:38.854032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:23:32.193 [2024-11-19 16:18:38.854506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:23:32.193 [2024-11-19 16:18:38.854957] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:23:32.193 [2024-11-19 16:18:38.854981] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:23:33.387 2381.50 IOPS, 9.30 MiB/s 00:23:33.387 Latency(us) 00:23:33.387 [2024-11-19T16:18:40.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.387 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:33.387 NVMe0n1 : 8.17 2332.16 9.11 15.67 0.00 54421.33 7119.59 7015926.69 00:23:33.387 [2024-11-19T16:18:40.102Z] =================================================================================================================== 00:23:33.387 [2024-11-19T16:18:40.102Z] Total : 2332.16 9.11 15.67 0.00 54421.33 7119.59 7015926.69 00:23:33.387 { 00:23:33.387 "results": [ 00:23:33.387 { 00:23:33.387 "job": "NVMe0n1", 00:23:33.387 "core_mask": "0x4", 00:23:33.387 "workload": "randread", 00:23:33.387 "status": "finished", 00:23:33.387 "queue_depth": 128, 00:23:33.387 "io_size": 4096, 00:23:33.387 "runtime": 8.169252, 00:23:33.387 "iops": 2332.1596640671632, 00:23:33.387 "mibps": 9.109998687762356, 00:23:33.387 "io_failed": 128, 00:23:33.387 "io_timeout": 0, 00:23:33.387 "avg_latency_us": 54421.32830182956, 00:23:33.387 "min_latency_us": 7119.592727272728, 00:23:33.387 "max_latency_us": 7015926.69090909 00:23:33.387 } 00:23:33.387 ], 00:23:33.387 "core_count": 1 00:23:33.387 } 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.387 Attaching 5 probes... 00:23:33.387 1382.451131: reset bdev controller NVMe0 00:23:33.387 1382.635698: reconnect bdev controller NVMe0 00:23:33.387 3383.154714: reconnect delay bdev controller NVMe0 00:23:33.387 3383.189114: reconnect bdev controller NVMe0 00:23:33.387 5386.623184: reconnect delay bdev controller NVMe0 00:23:33.387 5386.655136: reconnect bdev controller NVMe0 00:23:33.387 7386.920844: reconnect delay bdev controller NVMe0 00:23:33.387 7386.953951: reconnect bdev controller NVMe0 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 97891 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 97888 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97888 ']' 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97888 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97888 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:33.387 killing process with pid 97888 00:23:33.387 Received shutdown signal, test time was about 8.240791 seconds 00:23:33.387 00:23:33.387 Latency(us) 00:23:33.387 [2024-11-19T16:18:40.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.387 
[2024-11-19T16:18:40.102Z] =================================================================================================================== 00:23:33.387 [2024-11-19T16:18:40.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97888' 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97888 00:23:33.387 16:18:39 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97888 00:23:33.387 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.646 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:33.646 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:33.646 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.646 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.904 rmmod nvme_tcp 00:23:33.904 rmmod nvme_fabrics 00:23:33.904 rmmod nvme_keyring 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 97467 ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 97467 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 97467 ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 97467 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97467 00:23:33.904 killing process with pid 97467 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97467' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 97467 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 97467 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:33.904 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.905 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:23:33.905 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.905 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.905 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:33.905 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:34.163 ************************************ 00:23:34.163 END TEST nvmf_timeout 00:23:34.163 ************************************ 00:23:34.163 00:23:34.163 real 0m44.990s 00:23:34.163 user 2m10.876s 00:23:34.163 sys 0m5.557s 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.163 16:18:40 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 16:18:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:34.422 16:18:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:34.422 ************************************ 00:23:34.422 END TEST nvmf_host 00:23:34.422 ************************************ 00:23:34.422 00:23:34.422 real 5m38.796s 00:23:34.422 user 15m52.072s 00:23:34.422 sys 1m16.265s 00:23:34.422 16:18:40 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.422 16:18:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 16:18:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:34.422 16:18:40 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:34.422 ************************************ 00:23:34.422 END TEST nvmf_tcp 00:23:34.422 ************************************ 00:23:34.422 00:23:34.422 real 14m56.606s 00:23:34.422 user 39m12.225s 00:23:34.422 sys 4m7.738s 00:23:34.422 16:18:40 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:34.422 16:18:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:34.422 16:18:40 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:23:34.422 16:18:40 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:34.422 16:18:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:34.423 16:18:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:34.423 16:18:40 -- common/autotest_common.sh@10 -- # set +x 00:23:34.423 ************************************ 00:23:34.423 START TEST nvmf_dif 00:23:34.423 ************************************ 00:23:34.423 16:18:40 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:34.423 * Looking for test storage... 00:23:34.423 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:34.423 16:18:41 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:34.423 16:18:41 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:23:34.423 16:18:41 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.682 --rc genhtml_branch_coverage=1 00:23:34.682 --rc genhtml_function_coverage=1 00:23:34.682 --rc genhtml_legend=1 00:23:34.682 --rc geninfo_all_blocks=1 00:23:34.682 --rc geninfo_unexecuted_blocks=1 00:23:34.682 00:23:34.682 ' 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.682 --rc genhtml_branch_coverage=1 00:23:34.682 --rc genhtml_function_coverage=1 00:23:34.682 --rc genhtml_legend=1 00:23:34.682 --rc geninfo_all_blocks=1 00:23:34.682 --rc geninfo_unexecuted_blocks=1 00:23:34.682 00:23:34.682 ' 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.682 --rc genhtml_branch_coverage=1 00:23:34.682 --rc genhtml_function_coverage=1 00:23:34.682 --rc genhtml_legend=1 00:23:34.682 --rc geninfo_all_blocks=1 00:23:34.682 --rc geninfo_unexecuted_blocks=1 00:23:34.682 00:23:34.682 ' 00:23:34.682 16:18:41 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:34.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.682 --rc genhtml_branch_coverage=1 00:23:34.682 --rc genhtml_function_coverage=1 00:23:34.682 --rc genhtml_legend=1 00:23:34.682 --rc geninfo_all_blocks=1 00:23:34.682 --rc geninfo_unexecuted_blocks=1 00:23:34.682 00:23:34.682 ' 00:23:34.682 16:18:41 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:34.682 16:18:41 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:34.682 16:18:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:34.682 16:18:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.682 16:18:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.682 16:18:41 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.682 16:18:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:34.682 16:18:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:34.682 16:18:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:34.683 16:18:41 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:34.683 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:34.683 16:18:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:34.683 16:18:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:34.683 16:18:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:34.683 16:18:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:34.683 16:18:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.683 16:18:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:34.683 16:18:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:34.683 Cannot find device 
"nvmf_init_br" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:34.683 Cannot find device "nvmf_init_br2" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:34.683 Cannot find device "nvmf_tgt_br" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:34.683 Cannot find device "nvmf_tgt_br2" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:34.683 Cannot find device "nvmf_init_br" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:34.683 Cannot find device "nvmf_init_br2" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:34.683 Cannot find device "nvmf_tgt_br" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:34.683 Cannot find device "nvmf_tgt_br2" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:34.683 Cannot find device "nvmf_br" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:34.683 Cannot find device "nvmf_init_if" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:34.683 Cannot find device "nvmf_init_if2" 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:34.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:34.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:34.683 16:18:41 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:34.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:34.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:23:34.942 00:23:34.942 --- 10.0.0.3 ping statistics --- 00:23:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.942 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:34.942 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:34.942 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:23:34.942 00:23:34.942 --- 10.0.0.4 ping statistics --- 00:23:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.942 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:34.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:34.942 00:23:34.942 --- 10.0.0.1 ping statistics --- 00:23:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.942 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:34.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:23:34.942 00:23:34.942 --- 10.0.0.2 ping statistics --- 00:23:34.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.942 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.942 16:18:41 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:23:34.943 16:18:41 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:23:34.943 16:18:41 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:35.202 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:35.461 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:35.461 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:35.461 16:18:41 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:35.461 16:18:41 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:35.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=98425 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:35.461 16:18:41 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 98425 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 98425 ']' 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.461 16:18:41 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:35.461 16:18:42 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.461 16:18:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:35.461 [2024-11-19 16:18:42.063277] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:23:35.461 [2024-11-19 16:18:42.063556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.719 [2024-11-19 16:18:42.219136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.719 [2024-11-19 16:18:42.243047] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.719 [2024-11-19 16:18:42.243343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.719 [2024-11-19 16:18:42.243551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.719 [2024-11-19 16:18:42.243738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.719 [2024-11-19 16:18:42.243783] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.719 [2024-11-19 16:18:42.244324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.719 [2024-11-19 16:18:42.280335] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:23:35.719 16:18:42 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 16:18:42 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.719 16:18:42 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:35.719 16:18:42 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 [2024-11-19 16:18:42.380114] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.719 16:18:42 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 ************************************ 00:23:35.719 START TEST fio_dif_1_default 00:23:35.719 ************************************ 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:35.719 16:18:42 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 bdev_null0 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.719 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:35.719 [2024-11-19 16:18:42.428292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:35.978 { 00:23:35.978 "params": { 00:23:35.978 "name": "Nvme$subsystem", 00:23:35.978 "trtype": "$TEST_TRANSPORT", 00:23:35.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:35.978 "adrfam": "ipv4", 00:23:35.978 "trsvcid": "$NVMF_PORT", 00:23:35.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:35.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:35.978 "hdgst": ${hdgst:-false}, 00:23:35.978 "ddgst": ${ddgst:-false} 00:23:35.978 }, 00:23:35.978 "method": "bdev_nvme_attach_controller" 00:23:35.978 } 00:23:35.978 EOF 00:23:35.978 )") 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:35.978 "params": { 00:23:35.978 "name": "Nvme0", 00:23:35.978 "trtype": "tcp", 00:23:35.978 "traddr": "10.0.0.3", 00:23:35.978 "adrfam": "ipv4", 00:23:35.978 "trsvcid": "4420", 00:23:35.978 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:35.978 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:35.978 "hdgst": false, 00:23:35.978 "ddgst": false 00:23:35.978 }, 00:23:35.978 "method": "bdev_nvme_attach_controller" 00:23:35.978 }' 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:35.978 16:18:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:35.978 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:35.978 fio-3.35 00:23:35.978 Starting 1 thread 00:23:48.184 00:23:48.184 filename0: (groupid=0, jobs=1): err= 0: pid=98484: Tue Nov 19 16:18:53 2024 00:23:48.184 read: IOPS=9426, BW=36.8MiB/s (38.6MB/s)(368MiB/10001msec) 00:23:48.184 slat (usec): min=5, max=1118, avg= 7.95, stdev= 4.93 00:23:48.184 clat (usec): min=314, max=4214, avg=400.93, stdev=50.83 00:23:48.184 lat (usec): min=320, max=4234, avg=408.88, stdev=51.65 00:23:48.184 clat percentiles (usec): 00:23:48.184 | 1.00th=[ 334], 5.00th=[ 347], 10.00th=[ 355], 20.00th=[ 363], 00:23:48.184 | 30.00th=[ 375], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 404], 00:23:48.184 | 70.00th=[ 416], 80.00th=[ 433], 90.00th=[ 461], 95.00th=[ 486], 00:23:48.184 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 627], 00:23:48.184 | 99.99th=[ 1090] 00:23:48.184 bw ( KiB/s): min=36000, max=38624, per=100.00%, avg=37716.21, stdev=661.78, samples=19 00:23:48.184 iops : min= 9000, max= 9656, avg=9429.05, stdev=165.45, samples=19 00:23:48.184 lat (usec) : 500=97.00%, 750=2.97%, 1000=0.01% 00:23:48.184 lat (msec) : 2=0.01%, 10=0.01% 00:23:48.184 cpu : usr=85.32%, sys=12.77%, ctx=15, majf=0, minf=0 00:23:48.184 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:48.184 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.184 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:48.184 issued rwts: total=94276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:48.184 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:48.184 00:23:48.184 Run status group 0 (all jobs): 
00:23:48.184 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=368MiB (386MB), run=10001-10001msec 00:23:48.184 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:48.184 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 00:23:48.185 real 0m10.909s 00:23:48.185 user 0m9.110s 00:23:48.185 sys 0m1.529s 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 ************************************ 00:23:48.185 END TEST fio_dif_1_default 00:23:48.185 ************************************ 00:23:48.185 16:18:53 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:48.185 16:18:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:48.185 16:18:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 ************************************ 00:23:48.185 START TEST fio_dif_1_multi_subsystems 00:23:48.185 ************************************ 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 bdev_null0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 [2024-11-19 16:18:53.388480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 bdev_null1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp 
-a 10.0.0.3 -s 4420 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.185 { 00:23:48.185 "params": { 00:23:48.185 "name": "Nvme$subsystem", 00:23:48.185 "trtype": "$TEST_TRANSPORT", 00:23:48.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.185 "adrfam": "ipv4", 00:23:48.185 "trsvcid": "$NVMF_PORT", 00:23:48.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.185 "hdgst": ${hdgst:-false}, 00:23:48.185 "ddgst": ${ddgst:-false} 00:23:48.185 }, 00:23:48.185 "method": "bdev_nvme_attach_controller" 00:23:48.185 } 00:23:48.185 EOF 00:23:48.185 )") 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:48.185 16:18:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:48.185 { 00:23:48.185 "params": { 00:23:48.185 "name": "Nvme$subsystem", 00:23:48.185 "trtype": "$TEST_TRANSPORT", 00:23:48.185 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.185 "adrfam": "ipv4", 00:23:48.185 "trsvcid": "$NVMF_PORT", 00:23:48.185 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.185 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.185 "hdgst": ${hdgst:-false}, 00:23:48.185 "ddgst": ${ddgst:-false} 00:23:48.185 }, 00:23:48.185 "method": "bdev_nvme_attach_controller" 00:23:48.185 } 00:23:48.185 EOF 00:23:48.185 )") 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:23:48.185 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:48.186 "params": { 00:23:48.186 "name": "Nvme0", 00:23:48.186 "trtype": "tcp", 00:23:48.186 "traddr": "10.0.0.3", 00:23:48.186 "adrfam": "ipv4", 00:23:48.186 "trsvcid": "4420", 00:23:48.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:48.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:48.186 "hdgst": false, 00:23:48.186 "ddgst": false 00:23:48.186 }, 00:23:48.186 "method": "bdev_nvme_attach_controller" 00:23:48.186 },{ 00:23:48.186 "params": { 00:23:48.186 "name": "Nvme1", 00:23:48.186 "trtype": "tcp", 00:23:48.186 "traddr": "10.0.0.3", 00:23:48.186 "adrfam": "ipv4", 00:23:48.186 "trsvcid": "4420", 00:23:48.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.186 "hdgst": false, 00:23:48.186 "ddgst": false 00:23:48.186 }, 00:23:48.186 "method": "bdev_nvme_attach_controller" 00:23:48.186 }' 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 
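[editor's note] For readers reproducing the target-side setup traced above by hand: rpc_cmd in autotest_common.sh is a thin wrapper around SPDK's scripts/rpc.py, so the cnode1 branch of this test corresponds to the calls below. This is a minimal sketch, assuming an SPDK nvmf target is already running and scripts/rpc.py is invoked from the SPDK repo root; every argument (bdev size 64 MiB / 512-byte blocks, 16-byte metadata, DIF type 1, NQN, serial number, 10.0.0.3:4420 TCP listener) is taken from the trace above.

# create a DIF-enabled null bdev and expose it through an NVMe-oF/TCP subsystem
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420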
00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:48.186 16:18:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:48.186 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:48.186 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:48.186 fio-3.35 00:23:48.186 Starting 2 threads 00:23:58.164 00:23:58.164 filename0: (groupid=0, jobs=1): err= 0: pid=98638: Tue Nov 19 16:19:04 2024 00:23:58.164 read: IOPS=5117, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:23:58.164 slat (nsec): min=6325, max=61106, avg=13076.72, stdev=4863.29 00:23:58.164 clat (usec): min=607, max=1407, avg=745.10, stdev=62.62 00:23:58.164 lat (usec): min=619, max=1433, avg=758.18, stdev=63.44 00:23:58.164 clat percentiles (usec): 00:23:58.164 | 1.00th=[ 644], 5.00th=[ 668], 10.00th=[ 676], 20.00th=[ 693], 00:23:58.164 | 30.00th=[ 709], 40.00th=[ 717], 50.00th=[ 734], 60.00th=[ 750], 00:23:58.164 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 832], 95.00th=[ 865], 00:23:58.164 | 99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1020], 99.95th=[ 1106], 00:23:58.164 | 99.99th=[ 1369] 00:23:58.164 bw ( KiB/s): min=19968, max=20960, per=49.92%, avg=20437.89, stdev=292.71, samples=19 00:23:58.164 iops : min= 4992, max= 5240, avg=5109.47, stdev=73.18, samples=19 00:23:58.164 lat (usec) : 750=60.66%, 1000=39.18% 00:23:58.164 lat (msec) : 2=0.15% 00:23:58.164 cpu : usr=89.64%, sys=8.92%, ctx=20, majf=0, minf=0 00:23:58.164 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.164 issued rwts: total=51176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:58.164 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:58.164 filename1: (groupid=0, jobs=1): err= 0: pid=98639: Tue Nov 19 16:19:04 2024 00:23:58.164 read: IOPS=5117, BW=20.0MiB/s (21.0MB/s)(200MiB/10001msec) 00:23:58.164 slat (nsec): min=6384, max=72611, avg=12817.66, stdev=4839.38 00:23:58.164 clat (usec): min=570, max=1467, avg=747.19, stdev=66.86 00:23:58.164 lat (usec): min=577, max=1492, avg=760.01, stdev=67.80 00:23:58.164 clat percentiles (usec): 00:23:58.164 | 1.00th=[ 619], 5.00th=[ 652], 10.00th=[ 676], 20.00th=[ 693], 00:23:58.164 | 30.00th=[ 709], 40.00th=[ 725], 50.00th=[ 734], 60.00th=[ 750], 00:23:58.164 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 840], 95.00th=[ 873], 00:23:58.164 | 99.00th=[ 938], 99.50th=[ 963], 99.90th=[ 1029], 99.95th=[ 1172], 00:23:58.164 | 99.99th=[ 1385] 00:23:58.164 bw ( KiB/s): min=19968, max=20960, per=49.92%, avg=20437.89, stdev=292.71, samples=19 00:23:58.164 iops : min= 4992, max= 5240, avg=5109.47, stdev=73.18, samples=19 00:23:58.164 lat (usec) : 750=58.07%, 1000=41.75% 00:23:58.164 lat (msec) : 2=0.17% 00:23:58.164 cpu : usr=89.22%, sys=9.34%, ctx=103, majf=0, minf=0 00:23:58.164 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:58.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:58.164 issued rwts: total=51176,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:23:58.164 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:58.164 00:23:58.164 Run status group 0 (all jobs): 00:23:58.164 READ: bw=40.0MiB/s (41.9MB/s), 20.0MiB/s-20.0MiB/s (21.0MB/s-21.0MB/s), io=400MiB (419MB), run=10001-10001msec 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.164 00:23:58.164 real 0m10.999s 00:23:58.164 user 0m18.563s 00:23:58.164 sys 0m2.061s 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.164 16:19:04 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:58.164 ************************************ 00:23:58.164 END TEST fio_dif_1_multi_subsystems 00:23:58.164 ************************************ 00:23:58.164 16:19:04 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:58.164 16:19:04 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:58.164 16:19:04 nvmf_dif 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.165 16:19:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 ************************************ 00:23:58.165 START TEST fio_dif_rand_params 00:23:58.165 ************************************ 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 bdev_null0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:58.165 [2024-11-19 16:19:04.442667] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 
-- # fio /dev/fd/62 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:58.165 { 00:23:58.165 "params": { 00:23:58.165 "name": "Nvme$subsystem", 00:23:58.165 "trtype": "$TEST_TRANSPORT", 00:23:58.165 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.165 "adrfam": "ipv4", 00:23:58.165 "trsvcid": "$NVMF_PORT", 00:23:58.165 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.165 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.165 "hdgst": ${hdgst:-false}, 00:23:58.165 "ddgst": ${ddgst:-false} 00:23:58.165 }, 00:23:58.165 "method": "bdev_nvme_attach_controller" 00:23:58.165 } 00:23:58.165 EOF 00:23:58.165 )") 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
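[editor's note] The fio_bdev helper traced here preloads the spdk_bdev fio plugin and feeds fio a JSON bdev config over /dev/fd/62 (its content is the bdev_nvme_attach_controller block printed just below). A rough standalone equivalent is sketched here, assuming the JSON has been saved to a file named bdev.json and that the attached controller "Nvme0" exposes a bdev named Nvme0n1 (SPDK's usual <name>n<nsid> convention; an assumption, not shown in the log). The plugin path, fio path, and the 128k/3-job/iodepth-3/5-second random-read parameters are the ones used by this test case.

# hypothetical standalone invocation mirroring fio_bdev in this run
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev \
  --spdk_json_conf=bdev.json --thread=1 \
  --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5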
00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:58.165 "params": { 00:23:58.165 "name": "Nvme0", 00:23:58.165 "trtype": "tcp", 00:23:58.165 "traddr": "10.0.0.3", 00:23:58.165 "adrfam": "ipv4", 00:23:58.165 "trsvcid": "4420", 00:23:58.165 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:58.165 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:58.165 "hdgst": false, 00:23:58.165 "ddgst": false 00:23:58.165 }, 00:23:58.165 "method": "bdev_nvme_attach_controller" 00:23:58.165 }' 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:58.165 16:19:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:58.165 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:58.165 ... 
00:23:58.165 fio-3.35 00:23:58.165 Starting 3 threads 00:24:03.508 00:24:03.508 filename0: (groupid=0, jobs=1): err= 0: pid=98795: Tue Nov 19 16:19:10 2024 00:24:03.508 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(172MiB/5006msec) 00:24:03.508 slat (nsec): min=6883, max=53035, avg=15102.35, stdev=4586.84 00:24:03.508 clat (usec): min=10179, max=12365, avg=10896.18, stdev=342.71 00:24:03.508 lat (usec): min=10191, max=12378, avg=10911.28, stdev=343.16 00:24:03.508 clat percentiles (usec): 00:24:03.508 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:24:03.508 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:24:03.508 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:24:03.508 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:24:03.508 | 99.99th=[12387] 00:24:03.508 bw ( KiB/s): min=33792, max=36096, per=33.31%, avg=35097.60, stdev=728.59, samples=10 00:24:03.508 iops : min= 264, max= 282, avg=274.20, stdev= 5.69, samples=10 00:24:03.508 lat (msec) : 20=100.00% 00:24:03.508 cpu : usr=91.39%, sys=8.09%, ctx=13, majf=0, minf=0 00:24:03.508 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.508 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.508 filename0: (groupid=0, jobs=1): err= 0: pid=98796: Tue Nov 19 16:19:10 2024 00:24:03.508 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(172MiB/5011msec) 00:24:03.508 slat (nsec): min=6720, max=55434, avg=14364.23, stdev=5271.63 00:24:03.508 clat (usec): min=5223, max=12337, avg=10884.98, stdev=431.46 00:24:03.508 lat (usec): min=5230, max=12370, avg=10899.34, stdev=431.58 00:24:03.508 clat percentiles (usec): 00:24:03.508 | 1.00th=[10290], 5.00th=[10552], 10.00th=[10552], 20.00th=[10683], 00:24:03.508 | 30.00th=[10683], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:24:03.508 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:24:03.508 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:24:03.508 | 99.99th=[12387] 00:24:03.508 bw ( KiB/s): min=34560, max=36096, per=33.38%, avg=35174.40, stdev=705.74, samples=10 00:24:03.508 iops : min= 270, max= 282, avg=274.80, stdev= 5.51, samples=10 00:24:03.508 lat (msec) : 10=0.22%, 20=99.78% 00:24:03.508 cpu : usr=91.00%, sys=8.48%, ctx=9, majf=0, minf=0 00:24:03.508 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.508 issued rwts: total=1377,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.508 filename0: (groupid=0, jobs=1): err= 0: pid=98797: Tue Nov 19 16:19:10 2024 00:24:03.508 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(172MiB/5006msec) 00:24:03.508 slat (nsec): min=6916, max=53987, avg=15141.40, stdev=4921.63 00:24:03.508 clat (usec): min=10174, max=12346, avg=10895.04, stdev=343.54 00:24:03.508 lat (usec): min=10187, max=12376, avg=10910.18, stdev=343.92 00:24:03.508 clat percentiles (usec): 00:24:03.508 | 1.00th=[10290], 5.00th=[10421], 10.00th=[10552], 20.00th=[10683], 00:24:03.508 | 30.00th=[10683], 40.00th=[10683], 
50.00th=[10814], 60.00th=[10945], 00:24:03.508 | 70.00th=[10945], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:24:03.508 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12387], 99.95th=[12387], 00:24:03.508 | 99.99th=[12387] 00:24:03.508 bw ( KiB/s): min=33792, max=36096, per=33.31%, avg=35097.60, stdev=728.59, samples=10 00:24:03.508 iops : min= 264, max= 282, avg=274.20, stdev= 5.69, samples=10 00:24:03.508 lat (msec) : 20=100.00% 00:24:03.508 cpu : usr=91.39%, sys=8.05%, ctx=8, majf=0, minf=0 00:24:03.508 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.509 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.509 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.509 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:03.509 00:24:03.509 Run status group 0 (all jobs): 00:24:03.509 READ: bw=103MiB/s (108MB/s), 34.3MiB/s-34.3MiB/s (36.0MB/s-36.0MB/s), io=516MiB (541MB), run=5006-5011msec 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:03.767 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:03.768 16:19:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 bdev_null0 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 [2024-11-19 16:19:10.309763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 bdev_null1 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 bdev_null2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.768 { 00:24:03.768 "params": { 00:24:03.768 "name": 
"Nvme$subsystem", 00:24:03.768 "trtype": "$TEST_TRANSPORT", 00:24:03.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.768 "adrfam": "ipv4", 00:24:03.768 "trsvcid": "$NVMF_PORT", 00:24:03.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.768 "hdgst": ${hdgst:-false}, 00:24:03.768 "ddgst": ${ddgst:-false} 00:24:03.768 }, 00:24:03.768 "method": "bdev_nvme_attach_controller" 00:24:03.768 } 00:24:03.768 EOF 00:24:03.768 )") 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.768 { 00:24:03.768 "params": { 00:24:03.768 "name": "Nvme$subsystem", 00:24:03.768 "trtype": "$TEST_TRANSPORT", 00:24:03.768 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.768 "adrfam": "ipv4", 00:24:03.768 "trsvcid": "$NVMF_PORT", 00:24:03.768 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.768 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.768 "hdgst": ${hdgst:-false}, 00:24:03.768 "ddgst": ${ddgst:-false} 00:24:03.768 }, 00:24:03.768 "method": "bdev_nvme_attach_controller" 00:24:03.768 } 00:24:03.768 EOF 00:24:03.768 )") 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:03.768 16:19:10 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:03.768 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:03.769 { 00:24:03.769 "params": { 00:24:03.769 "name": "Nvme$subsystem", 00:24:03.769 "trtype": "$TEST_TRANSPORT", 00:24:03.769 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.769 "adrfam": "ipv4", 00:24:03.769 "trsvcid": "$NVMF_PORT", 00:24:03.769 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.769 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.769 "hdgst": ${hdgst:-false}, 00:24:03.769 "ddgst": ${ddgst:-false} 00:24:03.769 }, 00:24:03.769 "method": "bdev_nvme_attach_controller" 00:24:03.769 } 00:24:03.769 EOF 00:24:03.769 )") 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:03.769 "params": { 00:24:03.769 "name": "Nvme0", 00:24:03.769 "trtype": "tcp", 00:24:03.769 "traddr": "10.0.0.3", 00:24:03.769 "adrfam": "ipv4", 00:24:03.769 "trsvcid": "4420", 00:24:03.769 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:03.769 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:03.769 "hdgst": false, 00:24:03.769 "ddgst": false 00:24:03.769 }, 00:24:03.769 "method": "bdev_nvme_attach_controller" 00:24:03.769 },{ 00:24:03.769 "params": { 00:24:03.769 "name": "Nvme1", 00:24:03.769 "trtype": "tcp", 00:24:03.769 "traddr": "10.0.0.3", 00:24:03.769 "adrfam": "ipv4", 00:24:03.769 "trsvcid": "4420", 00:24:03.769 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.769 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.769 "hdgst": false, 00:24:03.769 "ddgst": false 00:24:03.769 }, 00:24:03.769 "method": "bdev_nvme_attach_controller" 00:24:03.769 },{ 00:24:03.769 "params": { 00:24:03.769 "name": "Nvme2", 00:24:03.769 "trtype": "tcp", 00:24:03.769 "traddr": "10.0.0.3", 00:24:03.769 "adrfam": "ipv4", 00:24:03.769 "trsvcid": "4420", 00:24:03.769 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.769 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:03.769 "hdgst": false, 00:24:03.769 "ddgst": false 00:24:03.769 }, 00:24:03.769 "method": "bdev_nvme_attach_controller" 00:24:03.769 }' 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:03.769 16:19:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:04.029 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:04.029 ... 00:24:04.029 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:04.029 ... 00:24:04.029 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:04.029 ... 00:24:04.029 fio-3.35 00:24:04.029 Starting 24 threads 00:24:16.235 00:24:16.235 filename0: (groupid=0, jobs=1): err= 0: pid=98892: Tue Nov 19 16:19:21 2024 00:24:16.235 read: IOPS=223, BW=894KiB/s (915kB/s)(8976KiB/10042msec) 00:24:16.235 slat (usec): min=4, max=8052, avg=22.35, stdev=239.73 00:24:16.235 clat (msec): min=11, max=128, avg=71.42, stdev=24.86 00:24:16.235 lat (msec): min=11, max=128, avg=71.44, stdev=24.86 00:24:16.235 clat percentiles (msec): 00:24:16.235 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 38], 20.00th=[ 48], 00:24:16.235 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:24:16.235 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 120], 00:24:16.235 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 129], 99.95th=[ 129], 00:24:16.235 | 99.99th=[ 129] 00:24:16.235 bw ( KiB/s): min= 664, max= 1784, per=4.28%, avg=891.25, stdev=255.13, samples=20 00:24:16.235 iops : min= 166, max= 446, avg=222.80, stdev=63.79, samples=20 00:24:16.235 lat (msec) : 20=0.18%, 50=25.80%, 100=57.98%, 250=16.04% 00:24:16.235 cpu : usr=31.18%, sys=1.97%, ctx=877, majf=0, minf=9 00:24:16.235 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.8%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:16.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.235 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.235 issued rwts: total=2244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.235 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.235 filename0: (groupid=0, jobs=1): err= 0: pid=98893: Tue Nov 19 16:19:21 2024 00:24:16.235 read: IOPS=217, BW=871KiB/s (892kB/s)(8752KiB/10051msec) 00:24:16.235 slat (usec): min=6, max=8023, avg=18.32, stdev=171.28 00:24:16.235 clat (msec): min=21, max=155, avg=73.34, stdev=26.19 00:24:16.235 lat (msec): min=21, max=155, avg=73.36, stdev=26.19 00:24:16.235 clat percentiles (msec): 00:24:16.235 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.236 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 78], 00:24:16.236 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 110], 95.00th=[ 121], 00:24:16.236 | 99.00th=[ 122], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:24:16.236 | 99.99th=[ 157] 00:24:16.236 bw ( KiB/s): min= 608, max= 1868, per=4.17%, avg=868.35, stdev=274.56, samples=20 00:24:16.236 iops : min= 152, max= 467, avg=217.05, stdev=68.61, samples=20 00:24:16.236 lat (msec) : 50=24.63%, 100=56.35%, 250=19.01% 00:24:16.236 cpu : usr=31.30%, sys=1.86%, ctx=884, majf=0, minf=9 00:24:16.236 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=81.9%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency 
: target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98894: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=210, BW=842KiB/s (862kB/s)(8456KiB/10046msec) 00:24:16.236 slat (usec): min=7, max=8037, avg=29.26, stdev=325.93 00:24:16.236 clat (msec): min=19, max=147, avg=75.79, stdev=24.29 00:24:16.236 lat (msec): min=19, max=147, avg=75.82, stdev=24.30 00:24:16.236 clat percentiles (msec): 00:24:16.236 | 1.00th=[ 25], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 52], 00:24:16.236 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:24:16.236 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 120], 00:24:16.236 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 144], 00:24:16.236 | 99.99th=[ 148] 00:24:16.236 bw ( KiB/s): min= 608, max= 1418, per=4.04%, avg=841.05, stdev=189.66, samples=20 00:24:16.236 iops : min= 152, max= 354, avg=210.20, stdev=47.28, samples=20 00:24:16.236 lat (msec) : 20=0.09%, 50=18.92%, 100=61.40%, 250=19.58% 00:24:16.236 cpu : usr=36.83%, sys=2.44%, ctx=1156, majf=0, minf=9 00:24:16.236 IO depths : 1=0.1%, 2=0.7%, 4=2.4%, 8=80.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=88.0%, 8=11.5%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 issued rwts: total=2114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98895: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=207, BW=829KiB/s (849kB/s)(8332KiB/10055msec) 00:24:16.236 slat (usec): min=7, max=8049, avg=25.37, stdev=277.92 00:24:16.236 clat (msec): min=12, max=155, avg=76.93, stdev=27.72 00:24:16.236 lat (msec): min=12, max=155, avg=76.96, stdev=27.72 00:24:16.236 clat percentiles (msec): 00:24:16.236 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 34], 20.00th=[ 52], 00:24:16.236 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 84], 00:24:16.236 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 121], 00:24:16.236 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 146], 99.95th=[ 155], 00:24:16.236 | 99.99th=[ 157] 00:24:16.236 bw ( KiB/s): min= 576, max= 2048, per=3.97%, avg=828.90, stdev=322.15, samples=20 00:24:16.236 iops : min= 144, max= 512, avg=207.20, stdev=80.53, samples=20 00:24:16.236 lat (msec) : 20=2.21%, 50=17.33%, 100=58.47%, 250=21.99% 00:24:16.236 cpu : usr=36.82%, sys=2.33%, ctx=957, majf=0, minf=9 00:24:16.236 IO depths : 1=0.1%, 2=1.9%, 4=7.6%, 8=74.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=89.7%, 8=8.7%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 issued rwts: total=2083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98896: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=225, BW=903KiB/s (925kB/s)(9072KiB/10046msec) 00:24:16.236 slat (usec): min=4, max=8032, avg=21.62, stdev=206.22 00:24:16.236 clat (msec): min=11, max=124, avg=70.71, stdev=26.68 00:24:16.236 lat (msec): min=11, max=124, avg=70.73, stdev=26.68 00:24:16.236 clat percentiles (msec): 00:24:16.236 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 48], 00:24:16.236 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 77], 00:24:16.236 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 117], 
00:24:16.236 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:24:16.236 | 99.99th=[ 125] 00:24:16.236 bw ( KiB/s): min= 592, max= 2192, per=4.32%, avg=900.50, stdev=336.04, samples=20 00:24:16.236 iops : min= 148, max= 548, avg=225.10, stdev=84.01, samples=20 00:24:16.236 lat (msec) : 20=2.34%, 50=22.35%, 100=58.29%, 250=17.02% 00:24:16.236 cpu : usr=37.75%, sys=2.21%, ctx=1399, majf=0, minf=9 00:24:16.236 IO depths : 1=0.1%, 2=0.1%, 4=0.4%, 8=83.3%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=87.1%, 8=12.8%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98897: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=218, BW=874KiB/s (895kB/s)(8760KiB/10028msec) 00:24:16.236 slat (usec): min=8, max=8023, avg=30.13, stdev=319.96 00:24:16.236 clat (msec): min=19, max=131, avg=73.05, stdev=23.98 00:24:16.236 lat (msec): min=19, max=131, avg=73.08, stdev=23.98 00:24:16.236 clat percentiles (msec): 00:24:16.236 | 1.00th=[ 32], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 48], 00:24:16.236 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.236 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 118], 00:24:16.236 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 127], 99.95th=[ 127], 00:24:16.236 | 99.99th=[ 132] 00:24:16.236 bw ( KiB/s): min= 638, max= 1496, per=4.19%, avg=872.30, stdev=203.51, samples=20 00:24:16.236 iops : min= 159, max= 374, avg=218.05, stdev=50.91, samples=20 00:24:16.236 lat (msec) : 20=0.18%, 50=24.16%, 100=59.22%, 250=16.44% 00:24:16.236 cpu : usr=37.82%, sys=2.23%, ctx=1064, majf=0, minf=9 00:24:16.236 IO depths : 1=0.2%, 2=0.5%, 4=1.3%, 8=82.4%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=87.3%, 8=12.4%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 issued rwts: total=2190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98898: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=223, BW=894KiB/s (916kB/s)(8960KiB/10020msec) 00:24:16.236 slat (usec): min=3, max=8037, avg=31.18, stdev=348.92 00:24:16.236 clat (msec): min=12, max=132, avg=71.40, stdev=25.50 00:24:16.236 lat (msec): min=12, max=132, avg=71.43, stdev=25.50 00:24:16.236 clat percentiles (msec): 00:24:16.236 | 1.00th=[ 23], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.236 | 30.00th=[ 57], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 74], 00:24:16.236 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 118], 00:24:16.236 | 99.00th=[ 121], 99.50th=[ 123], 99.90th=[ 128], 99.95th=[ 128], 00:24:16.236 | 99.99th=[ 132] 00:24:16.236 bw ( KiB/s): min= 640, max= 1816, per=4.27%, avg=889.60, stdev=260.34, samples=20 00:24:16.236 iops : min= 160, max= 454, avg=222.40, stdev=65.08, samples=20 00:24:16.236 lat (msec) : 20=0.45%, 50=24.11%, 100=59.33%, 250=16.12% 00:24:16.236 cpu : usr=34.59%, sys=1.82%, ctx=974, majf=0, minf=9 00:24:16.236 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.2%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:16.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.236 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:16.236 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.236 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.236 filename0: (groupid=0, jobs=1): err= 0: pid=98899: Tue Nov 19 16:19:21 2024 00:24:16.236 read: IOPS=231, BW=925KiB/s (947kB/s)(9300KiB/10056msec) 00:24:16.236 slat (usec): min=5, max=4025, avg=16.59, stdev=117.73 00:24:16.236 clat (usec): min=1488, max=155928, avg=69002.09, stdev=33201.73 00:24:16.236 lat (usec): min=1496, max=155936, avg=69018.68, stdev=33201.08 00:24:16.236 clat percentiles (usec): 00:24:16.236 | 1.00th=[ 1598], 5.00th=[ 3163], 10.00th=[ 21890], 20.00th=[ 40109], 00:24:16.236 | 30.00th=[ 52167], 40.00th=[ 69731], 50.00th=[ 71828], 60.00th=[ 76022], 00:24:16.236 | 70.00th=[ 84411], 80.00th=[101188], 90.00th=[110625], 95.00th=[119014], 00:24:16.236 | 99.00th=[121111], 99.50th=[128451], 99.90th=[143655], 99.95th=[143655], 00:24:16.236 | 99.99th=[156238] 00:24:16.236 bw ( KiB/s): min= 576, max= 3489, per=4.44%, avg=925.65, stdev=622.27, samples=20 00:24:16.237 iops : min= 144, max= 872, avg=231.40, stdev=155.51, samples=20 00:24:16.237 lat (msec) : 2=2.88%, 4=4.69%, 10=0.69%, 20=1.46%, 50=19.27% 00:24:16.237 lat (msec) : 100=50.92%, 250=20.09% 00:24:16.237 cpu : usr=34.71%, sys=1.99%, ctx=971, majf=0, minf=0 00:24:16.237 IO depths : 1=0.3%, 2=1.2%, 4=3.4%, 8=79.0%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=88.6%, 8=10.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98900: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=217, BW=872KiB/s (893kB/s)(8784KiB/10076msec) 00:24:16.237 slat (usec): min=3, max=4248, avg=21.98, stdev=175.15 00:24:16.237 clat (msec): min=2, max=155, avg=73.17, stdev=30.77 00:24:16.237 lat (msec): min=2, max=155, avg=73.19, stdev=30.76 00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 29], 20.00th=[ 49], 00:24:16.237 | 30.00th=[ 66], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 79], 00:24:16.237 | 70.00th=[ 87], 80.00th=[ 103], 90.00th=[ 114], 95.00th=[ 120], 00:24:16.237 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 153], 00:24:16.237 | 99.99th=[ 157] 00:24:16.237 bw ( KiB/s): min= 584, max= 2788, per=4.18%, avg=871.20, stdev=471.99, samples=20 00:24:16.237 iops : min= 146, max= 697, avg=217.80, stdev=118.00, samples=20 00:24:16.237 lat (msec) : 4=4.14%, 10=0.96%, 20=2.09%, 50=14.39%, 100=57.88% 00:24:16.237 lat (msec) : 250=20.54% 00:24:16.237 cpu : usr=46.10%, sys=3.01%, ctx=1538, majf=0, minf=1 00:24:16.237 IO depths : 1=0.3%, 2=2.8%, 4=10.0%, 8=72.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=90.0%, 8=7.8%, 16=2.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98901: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=207, BW=831KiB/s (851kB/s)(8348KiB/10050msec) 00:24:16.237 slat (usec): min=5, max=4026, avg=16.60, stdev=87.96 00:24:16.237 clat (msec): min=13, max=155, avg=76.87, stdev=29.11 00:24:16.237 lat (msec): min=13, max=155, avg=76.89, stdev=29.11 
00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 14], 5.00th=[ 27], 10.00th=[ 33], 20.00th=[ 50], 00:24:16.237 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:24:16.237 | 70.00th=[ 95], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:24:16.237 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:24:16.237 | 99.99th=[ 157] 00:24:16.237 bw ( KiB/s): min= 528, max= 2160, per=3.97%, avg=828.10, stdev=347.10, samples=20 00:24:16.237 iops : min= 132, max= 540, avg=207.00, stdev=86.77, samples=20 00:24:16.237 lat (msec) : 20=2.20%, 50=18.45%, 100=53.91%, 250=25.44% 00:24:16.237 cpu : usr=37.66%, sys=2.16%, ctx=1150, majf=0, minf=9 00:24:16.237 IO depths : 1=0.1%, 2=1.2%, 4=4.9%, 8=77.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=89.2%, 8=9.7%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=2087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98902: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=193, BW=772KiB/s (791kB/s)(7740KiB/10020msec) 00:24:16.237 slat (usec): min=5, max=8064, avg=29.55, stdev=331.84 00:24:16.237 clat (msec): min=20, max=163, avg=82.68, stdev=24.59 00:24:16.237 lat (msec): min=20, max=163, avg=82.70, stdev=24.59 00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 36], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 68], 00:24:16.237 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 85], 00:24:16.237 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 121], 00:24:16.237 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 163], 99.95th=[ 163], 00:24:16.237 | 99.99th=[ 163] 00:24:16.237 bw ( KiB/s): min= 592, max= 1424, per=3.68%, avg=767.60, stdev=182.77, samples=20 00:24:16.237 iops : min= 148, max= 356, avg=191.90, stdev=45.69, samples=20 00:24:16.237 lat (msec) : 50=12.56%, 100=63.62%, 250=23.82% 00:24:16.237 cpu : usr=35.04%, sys=2.18%, ctx=1053, majf=0, minf=9 00:24:16.237 IO depths : 1=0.2%, 2=3.1%, 4=12.0%, 8=70.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=90.9%, 8=6.5%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=1935,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98903: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=219, BW=877KiB/s (898kB/s)(8808KiB/10045msec) 00:24:16.237 slat (usec): min=5, max=8033, avg=20.02, stdev=189.26 00:24:16.237 clat (msec): min=12, max=150, avg=72.81, stdev=25.50 00:24:16.237 lat (msec): min=12, max=150, avg=72.83, stdev=25.51 00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.237 | 30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.237 | 70.00th=[ 83], 80.00th=[ 97], 90.00th=[ 112], 95.00th=[ 118], 00:24:16.237 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 140], 99.95th=[ 144], 00:24:16.237 | 99.99th=[ 150] 00:24:16.237 bw ( KiB/s): min= 640, max= 1848, per=4.19%, avg=874.25, stdev=266.08, samples=20 00:24:16.237 iops : min= 160, max= 462, avg=218.55, stdev=66.52, samples=20 00:24:16.237 lat (msec) : 20=0.18%, 50=23.84%, 100=58.31%, 250=17.67% 00:24:16.237 cpu : usr=34.93%, sys=1.86%, ctx=1122, majf=0, minf=9 
00:24:16.237 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=82.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=2202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98904: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=216, BW=865KiB/s (886kB/s)(8668KiB/10018msec) 00:24:16.237 slat (usec): min=8, max=8023, avg=22.05, stdev=243.29 00:24:16.237 clat (msec): min=23, max=144, avg=73.86, stdev=23.68 00:24:16.237 lat (msec): min=23, max=144, avg=73.88, stdev=23.68 00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 32], 5.00th=[ 39], 10.00th=[ 47], 20.00th=[ 50], 00:24:16.237 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.237 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 110], 95.00th=[ 120], 00:24:16.237 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 144], 00:24:16.237 | 99.99th=[ 144] 00:24:16.237 bw ( KiB/s): min= 640, max= 1282, per=4.13%, avg=860.50, stdev=176.38, samples=20 00:24:16.237 iops : min= 160, max= 320, avg=215.10, stdev=44.03, samples=20 00:24:16.237 lat (msec) : 50=20.44%, 100=62.94%, 250=16.61% 00:24:16.237 cpu : usr=30.77%, sys=1.90%, ctx=857, majf=0, minf=9 00:24:16.237 IO depths : 1=0.2%, 2=1.0%, 4=3.3%, 8=80.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:16.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 complete : 0=0.0%, 4=87.6%, 8=11.7%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.237 issued rwts: total=2167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.237 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.237 filename1: (groupid=0, jobs=1): err= 0: pid=98905: Tue Nov 19 16:19:21 2024 00:24:16.237 read: IOPS=217, BW=868KiB/s (889kB/s)(8724KiB/10050msec) 00:24:16.237 slat (usec): min=8, max=8024, avg=21.34, stdev=242.70 00:24:16.237 clat (msec): min=11, max=152, avg=73.53, stdev=27.16 00:24:16.237 lat (msec): min=11, max=152, avg=73.55, stdev=27.17 00:24:16.237 clat percentiles (msec): 00:24:16.237 | 1.00th=[ 14], 5.00th=[ 28], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.237 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 79], 00:24:16.237 | 70.00th=[ 85], 80.00th=[ 100], 90.00th=[ 112], 95.00th=[ 120], 00:24:16.237 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 146], 99.95th=[ 150], 00:24:16.237 | 99.99th=[ 153] 00:24:16.237 bw ( KiB/s): min= 584, max= 2112, per=4.17%, avg=868.10, stdev=325.51, samples=20 00:24:16.237 iops : min= 146, max= 528, avg=217.00, stdev=81.37, samples=20 00:24:16.237 lat (msec) : 20=2.15%, 50=20.95%, 100=57.08%, 250=19.81% 00:24:16.238 cpu : usr=40.22%, sys=2.12%, ctx=1184, majf=0, minf=9 00:24:16.238 IO depths : 1=0.1%, 2=0.5%, 4=1.6%, 8=81.4%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename1: (groupid=0, jobs=1): err= 0: pid=98906: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=225, BW=901KiB/s (923kB/s)(9032KiB/10020msec) 00:24:16.238 slat (usec): min=3, max=8025, avg=23.81, stdev=252.86 00:24:16.238 clat (msec): min=17, max=126, 
avg=70.86, stdev=24.91 00:24:16.238 lat (msec): min=17, max=126, avg=70.89, stdev=24.90 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 48], 00:24:16.238 | 30.00th=[ 55], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:24:16.238 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 110], 95.00th=[ 117], 00:24:16.238 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 127], 99.95th=[ 127], 00:24:16.238 | 99.99th=[ 127] 00:24:16.238 bw ( KiB/s): min= 664, max= 1676, per=4.30%, avg=896.60, stdev=238.21, samples=20 00:24:16.238 iops : min= 166, max= 419, avg=224.15, stdev=59.55, samples=20 00:24:16.238 lat (msec) : 20=0.40%, 50=24.53%, 100=58.86%, 250=16.21% 00:24:16.238 cpu : usr=43.12%, sys=2.63%, ctx=1532, majf=0, minf=9 00:24:16.238 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.8%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename1: (groupid=0, jobs=1): err= 0: pid=98907: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=209, BW=837KiB/s (857kB/s)(8380KiB/10012msec) 00:24:16.238 slat (usec): min=8, max=4025, avg=17.81, stdev=124.00 00:24:16.238 clat (msec): min=26, max=125, avg=76.35, stdev=23.47 00:24:16.238 lat (msec): min=26, max=125, avg=76.37, stdev=23.48 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 31], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 56], 00:24:16.238 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 81], 00:24:16.238 | 70.00th=[ 86], 80.00th=[ 97], 90.00th=[ 111], 95.00th=[ 118], 00:24:16.238 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 126], 99.95th=[ 126], 00:24:16.238 | 99.99th=[ 126] 00:24:16.238 bw ( KiB/s): min= 656, max= 1539, per=4.00%, avg=834.05, stdev=204.02, samples=20 00:24:16.238 iops : min= 164, max= 384, avg=208.45, stdev=50.86, samples=20 00:24:16.238 lat (msec) : 50=16.95%, 100=65.49%, 250=17.57% 00:24:16.238 cpu : usr=42.57%, sys=2.63%, ctx=1443, majf=0, minf=9 00:24:16.238 IO depths : 1=0.1%, 2=2.4%, 4=9.5%, 8=73.6%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=89.5%, 8=8.4%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2095,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename2: (groupid=0, jobs=1): err= 0: pid=98908: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=223, BW=894KiB/s (915kB/s)(8960KiB/10027msec) 00:24:16.238 slat (usec): min=4, max=4026, avg=18.29, stdev=119.94 00:24:16.238 clat (msec): min=14, max=143, avg=71.52, stdev=25.42 00:24:16.238 lat (msec): min=14, max=143, avg=71.54, stdev=25.42 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 22], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 48], 00:24:16.238 | 30.00th=[ 55], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 77], 00:24:16.238 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 118], 00:24:16.238 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 125], 99.95th=[ 125], 00:24:16.238 | 99.99th=[ 144] 00:24:16.238 bw ( KiB/s): min= 648, max= 1800, per=4.27%, avg=889.60, stdev=258.96, samples=20 00:24:16.238 iops : min= 162, max= 450, avg=222.40, stdev=64.74, samples=20 00:24:16.238 lat (msec) : 20=0.76%, 50=23.35%, 
100=58.93%, 250=16.96% 00:24:16.238 cpu : usr=43.01%, sys=2.70%, ctx=1379, majf=0, minf=9 00:24:16.238 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=82.9%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename2: (groupid=0, jobs=1): err= 0: pid=98909: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=216, BW=867KiB/s (888kB/s)(8680KiB/10013msec) 00:24:16.238 slat (usec): min=5, max=12026, avg=38.98, stdev=462.57 00:24:16.238 clat (msec): min=18, max=129, avg=73.64, stdev=23.45 00:24:16.238 lat (msec): min=18, max=129, avg=73.67, stdev=23.45 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 28], 5.00th=[ 40], 10.00th=[ 47], 20.00th=[ 50], 00:24:16.238 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.238 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 118], 00:24:16.238 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:24:16.238 | 99.99th=[ 130] 00:24:16.238 bw ( KiB/s): min= 616, max= 1280, per=4.15%, avg=864.30, stdev=176.76, samples=20 00:24:16.238 iops : min= 154, max= 320, avg=216.05, stdev=44.20, samples=20 00:24:16.238 lat (msec) : 20=0.09%, 50=21.34%, 100=61.84%, 250=16.73% 00:24:16.238 cpu : usr=33.91%, sys=2.01%, ctx=975, majf=0, minf=9 00:24:16.238 IO depths : 1=0.2%, 2=0.6%, 4=2.1%, 8=81.7%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename2: (groupid=0, jobs=1): err= 0: pid=98910: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=223, BW=896KiB/s (917kB/s)(8988KiB/10035msec) 00:24:16.238 slat (usec): min=3, max=4030, avg=24.61, stdev=189.13 00:24:16.238 clat (msec): min=13, max=154, avg=71.28, stdev=25.75 00:24:16.238 lat (msec): min=13, max=154, avg=71.31, stdev=25.75 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 24], 5.00th=[ 28], 10.00th=[ 38], 20.00th=[ 48], 00:24:16.238 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.238 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 118], 00:24:16.238 | 99.00th=[ 123], 99.50th=[ 124], 99.90th=[ 125], 99.95th=[ 125], 00:24:16.238 | 99.99th=[ 155] 00:24:16.238 bw ( KiB/s): min= 628, max= 1880, per=4.29%, avg=894.30, stdev=277.94, samples=20 00:24:16.238 iops : min= 157, max= 470, avg=223.55, stdev=69.51, samples=20 00:24:16.238 lat (msec) : 20=0.58%, 50=24.83%, 100=57.59%, 250=17.00% 00:24:16.238 cpu : usr=38.32%, sys=2.19%, ctx=1168, majf=0, minf=9 00:24:16.238 IO depths : 1=0.1%, 2=0.1%, 4=0.6%, 8=83.3%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:16.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.238 issued rwts: total=2247,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.238 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.238 filename2: (groupid=0, jobs=1): err= 0: pid=98911: Tue Nov 19 16:19:21 2024 00:24:16.238 read: IOPS=225, BW=902KiB/s (924kB/s)(9036KiB/10017msec) 00:24:16.238 slat 
(usec): min=3, max=8032, avg=23.97, stdev=252.93 00:24:16.238 clat (msec): min=12, max=125, avg=70.82, stdev=24.82 00:24:16.238 lat (msec): min=12, max=125, avg=70.84, stdev=24.81 00:24:16.238 clat percentiles (msec): 00:24:16.238 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.238 | 30.00th=[ 56], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 74], 00:24:16.238 | 70.00th=[ 81], 80.00th=[ 90], 90.00th=[ 111], 95.00th=[ 116], 00:24:16.238 | 99.00th=[ 122], 99.50th=[ 123], 99.90th=[ 127], 99.95th=[ 127], 00:24:16.238 | 99.99th=[ 127] 00:24:16.238 bw ( KiB/s): min= 664, max= 1720, per=4.31%, avg=897.05, stdev=242.96, samples=20 00:24:16.239 iops : min= 166, max= 430, avg=224.25, stdev=60.73, samples=20 00:24:16.239 lat (msec) : 20=0.18%, 50=25.32%, 100=58.26%, 250=16.25% 00:24:16.239 cpu : usr=37.98%, sys=1.87%, ctx=1140, majf=0, minf=9 00:24:16.239 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:16.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 issued rwts: total=2259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.239 filename2: (groupid=0, jobs=1): err= 0: pid=98912: Tue Nov 19 16:19:21 2024 00:24:16.239 read: IOPS=222, BW=891KiB/s (912kB/s)(8924KiB/10018msec) 00:24:16.239 slat (usec): min=4, max=9022, avg=33.82, stdev=388.82 00:24:16.239 clat (msec): min=12, max=121, avg=71.69, stdev=25.26 00:24:16.239 lat (msec): min=12, max=121, avg=71.72, stdev=25.24 00:24:16.239 clat percentiles (msec): 00:24:16.239 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.239 | 30.00th=[ 59], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:24:16.239 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 109], 95.00th=[ 121], 00:24:16.239 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 123], 00:24:16.239 | 99.99th=[ 123] 00:24:16.239 bw ( KiB/s): min= 640, max= 1752, per=4.25%, avg=886.00, stdev=248.86, samples=20 00:24:16.239 iops : min= 160, max= 438, avg=221.50, stdev=62.21, samples=20 00:24:16.239 lat (msec) : 20=0.18%, 50=25.68%, 100=57.96%, 250=16.18% 00:24:16.239 cpu : usr=31.31%, sys=1.86%, ctx=881, majf=0, minf=9 00:24:16.239 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:16.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 issued rwts: total=2231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.239 filename2: (groupid=0, jobs=1): err= 0: pid=98913: Tue Nov 19 16:19:21 2024 00:24:16.239 read: IOPS=215, BW=860KiB/s (881kB/s)(8616KiB/10018msec) 00:24:16.239 slat (usec): min=6, max=4024, avg=16.67, stdev=86.54 00:24:16.239 clat (msec): min=22, max=151, avg=74.33, stdev=25.12 00:24:16.239 lat (msec): min=22, max=151, avg=74.35, stdev=25.12 00:24:16.239 clat percentiles (msec): 00:24:16.239 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 38], 20.00th=[ 49], 00:24:16.239 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 80], 00:24:16.239 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 112], 95.00th=[ 120], 00:24:16.239 | 99.00th=[ 124], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 142], 00:24:16.239 | 99.99th=[ 153] 00:24:16.239 bw ( KiB/s): min= 632, max= 1664, per=4.10%, avg=855.20, stdev=233.71, samples=20 00:24:16.239 iops : min= 
158, max= 416, avg=213.80, stdev=58.43, samples=20 00:24:16.239 lat (msec) : 50=21.26%, 100=59.70%, 250=19.03% 00:24:16.239 cpu : usr=36.18%, sys=2.11%, ctx=1106, majf=0, minf=9 00:24:16.239 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.2%, 16=16.3%, 32=0.0%, >=64=0.0% 00:24:16.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 complete : 0=0.0%, 4=87.6%, 8=12.1%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 issued rwts: total=2154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.239 filename2: (groupid=0, jobs=1): err= 0: pid=98914: Tue Nov 19 16:19:21 2024 00:24:16.239 read: IOPS=221, BW=885KiB/s (906kB/s)(8864KiB/10021msec) 00:24:16.239 slat (usec): min=5, max=8030, avg=20.90, stdev=212.94 00:24:16.239 clat (msec): min=15, max=144, avg=72.24, stdev=25.38 00:24:16.239 lat (msec): min=15, max=144, avg=72.26, stdev=25.37 00:24:16.239 clat percentiles (msec): 00:24:16.239 | 1.00th=[ 24], 5.00th=[ 31], 10.00th=[ 42], 20.00th=[ 49], 00:24:16.239 | 30.00th=[ 57], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 75], 00:24:16.239 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 121], 00:24:16.239 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 128], 99.95th=[ 128], 00:24:16.239 | 99.99th=[ 144] 00:24:16.239 bw ( KiB/s): min= 632, max= 1704, per=4.22%, avg=880.00, stdev=241.23, samples=20 00:24:16.239 iops : min= 158, max= 426, avg=220.00, stdev=60.31, samples=20 00:24:16.239 lat (msec) : 20=0.32%, 50=23.74%, 100=58.35%, 250=17.60% 00:24:16.239 cpu : usr=33.21%, sys=2.15%, ctx=1340, majf=0, minf=9 00:24:16.239 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.0%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:16.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 complete : 0=0.0%, 4=87.4%, 8=12.2%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 issued rwts: total=2216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.239 filename2: (groupid=0, jobs=1): err= 0: pid=98915: Tue Nov 19 16:19:21 2024 00:24:16.239 read: IOPS=218, BW=872KiB/s (893kB/s)(8768KiB/10051msec) 00:24:16.239 slat (usec): min=5, max=8025, avg=24.16, stdev=259.52 00:24:16.239 clat (msec): min=11, max=150, avg=73.18, stdev=26.78 00:24:16.239 lat (msec): min=11, max=150, avg=73.21, stdev=26.78 00:24:16.239 clat percentiles (msec): 00:24:16.239 | 1.00th=[ 22], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 48], 00:24:16.239 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:24:16.239 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 112], 95.00th=[ 118], 00:24:16.239 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 144], 00:24:16.239 | 99.99th=[ 150] 00:24:16.239 bw ( KiB/s): min= 616, max= 2087, per=4.17%, avg=869.70, stdev=317.14, samples=20 00:24:16.239 iops : min= 154, max= 521, avg=217.35, stdev=79.10, samples=20 00:24:16.239 lat (msec) : 20=0.78%, 50=21.85%, 100=58.26%, 250=19.11% 00:24:16.239 cpu : usr=36.99%, sys=2.17%, ctx=1117, majf=0, minf=9 00:24:16.239 IO depths : 1=0.1%, 2=0.1%, 4=0.7%, 8=82.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:24:16.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:16.239 issued rwts: total=2192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:16.239 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:16.239 00:24:16.239 Run status group 0 (all jobs): 00:24:16.239 READ: bw=20.3MiB/s 
(21.3MB/s), 772KiB/s-925KiB/s (791kB/s-947kB/s), io=205MiB (215MB), run=10012-10076msec 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.239 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
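A quick sanity check on the Run status summary a few lines above, using only numbers printed in this log: the group READ bandwidth is the aggregate over all per-file jobs, and each job's per= field is its average bandwidth divided by that aggregate. Worked through for one job (pid=98897):

  20.3 MiB/s * 1024 ≈ 20,787 KiB/s                    (group aggregate READ bandwidth)
  872.30 KiB/s / 20,787 KiB/s ≈ 4.2%                  (matches the reported per=4.19%)
  874 KiB/s / 218 IOPS ≈ 4.0 KiB per read             (i.e. roughly 4 KiB random reads)
  205 MiB / ~10.0-10.1 s ≈ 20.3 MiB/s                 (io= and run= agree with the group bw=)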
00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 bdev_null0 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 [2024-11-19 16:19:21.551787] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:16.240 16:19:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 bdev_null1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:16.240 { 00:24:16.240 "params": { 00:24:16.240 "name": "Nvme$subsystem", 00:24:16.240 "trtype": "$TEST_TRANSPORT", 00:24:16.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:16.240 "adrfam": "ipv4", 00:24:16.240 "trsvcid": "$NVMF_PORT", 00:24:16.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:16.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:16.240 "hdgst": ${hdgst:-false}, 00:24:16.240 "ddgst": ${ddgst:-false} 00:24:16.240 }, 00:24:16.240 "method": "bdev_nvme_attach_controller" 00:24:16.240 } 00:24:16.240 EOF 00:24:16.240 )") 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:16.240 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:16.240 { 00:24:16.240 "params": { 00:24:16.240 "name": "Nvme$subsystem", 00:24:16.240 "trtype": "$TEST_TRANSPORT", 00:24:16.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:16.240 "adrfam": "ipv4", 00:24:16.240 "trsvcid": "$NVMF_PORT", 00:24:16.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:16.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:16.240 "hdgst": ${hdgst:-false}, 00:24:16.241 "ddgst": ${ddgst:-false} 00:24:16.241 }, 00:24:16.241 "method": "bdev_nvme_attach_controller" 00:24:16.241 } 00:24:16.241 EOF 00:24:16.241 )") 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:16.241 "params": { 00:24:16.241 "name": "Nvme0", 00:24:16.241 "trtype": "tcp", 00:24:16.241 "traddr": "10.0.0.3", 00:24:16.241 "adrfam": "ipv4", 00:24:16.241 "trsvcid": "4420", 00:24:16.241 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.241 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:16.241 "hdgst": false, 00:24:16.241 "ddgst": false 00:24:16.241 }, 00:24:16.241 "method": "bdev_nvme_attach_controller" 00:24:16.241 },{ 00:24:16.241 "params": { 00:24:16.241 "name": "Nvme1", 00:24:16.241 "trtype": "tcp", 00:24:16.241 "traddr": "10.0.0.3", 00:24:16.241 "adrfam": "ipv4", 00:24:16.241 "trsvcid": "4420", 00:24:16.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.241 "hdgst": false, 00:24:16.241 "ddgst": false 00:24:16.241 }, 00:24:16.241 "method": "bdev_nvme_attach_controller" 00:24:16.241 }' 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:16.241 16:19:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:16.241 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:16.241 ... 00:24:16.241 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:16.241 ... 
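For reference, the subsystem setup traced in the xtrace output above reduces to four SPDK RPCs per subsystem. The sketch below replays them for subsystem 0 by calling scripts/rpc.py directly (rpc_cmd in the trace is the test suite's wrapper around that script); the arguments are copied verbatim from the trace, and the sketch assumes an nvmf_tgt is already running with a TCP transport created earlier in dif.sh:

# 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# NVMe-oF subsystem that any host may connect to
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
# Expose the null bdev as a namespace of that subsystem
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# Listen on the TCP address/port used throughout this run
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

Subsystem 1 is created the same way with cnode1/bdev_null1, and the fio run that follows attaches to both controllers through the JSON configuration printed just above.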
00:24:16.241 fio-3.35 00:24:16.241 Starting 4 threads 00:24:21.553 00:24:21.553 filename0: (groupid=0, jobs=1): err= 0: pid=99060: Tue Nov 19 16:19:27 2024 00:24:21.553 read: IOPS=2144, BW=16.8MiB/s (17.6MB/s)(83.8MiB/5002msec) 00:24:21.553 slat (usec): min=3, max=104, avg=13.35, stdev= 5.98 00:24:21.553 clat (usec): min=619, max=7991, avg=3686.85, stdev=852.19 00:24:21.553 lat (usec): min=626, max=8004, avg=3700.20, stdev=852.87 00:24:21.553 clat percentiles (usec): 00:24:21.553 | 1.00th=[ 1287], 5.00th=[ 1483], 10.00th=[ 2606], 20.00th=[ 3261], 00:24:21.553 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 3884], 60.00th=[ 3982], 00:24:21.553 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4359], 95.00th=[ 4883], 00:24:21.553 | 99.00th=[ 5342], 99.50th=[ 5800], 99.90th=[ 7177], 99.95th=[ 7898], 00:24:21.553 | 99.99th=[ 7898] 00:24:21.553 bw ( KiB/s): min=15568, max=19312, per=26.78%, avg=17333.33, stdev=1485.76, samples=9 00:24:21.553 iops : min= 1946, max= 2414, avg=2166.67, stdev=185.72, samples=9 00:24:21.553 lat (usec) : 750=0.12%, 1000=0.02% 00:24:21.553 lat (msec) : 2=6.73%, 4=54.33%, 10=38.80% 00:24:21.553 cpu : usr=90.06%, sys=8.84%, ctx=7, majf=0, minf=0 00:24:21.553 IO depths : 1=0.1%, 2=13.7%, 4=57.2%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.553 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.553 issued rwts: total=10726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.553 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.553 filename0: (groupid=0, jobs=1): err= 0: pid=99061: Tue Nov 19 16:19:27 2024 00:24:21.553 read: IOPS=1918, BW=15.0MiB/s (15.7MB/s)(75.0MiB/5002msec) 00:24:21.553 slat (nsec): min=4614, max=59508, avg=14087.89, stdev=5368.83 00:24:21.553 clat (usec): min=989, max=7590, avg=4115.41, stdev=716.03 00:24:21.553 lat (usec): min=997, max=7605, avg=4129.50, stdev=716.23 00:24:21.553 clat percentiles (usec): 00:24:21.553 | 1.00th=[ 2114], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3720], 00:24:21.553 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4015], 60.00th=[ 4080], 00:24:21.553 | 70.00th=[ 4178], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5604], 00:24:21.554 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 6783], 99.95th=[ 6915], 00:24:21.554 | 99.99th=[ 7570] 00:24:21.554 bw ( KiB/s): min=13824, max=16480, per=23.76%, avg=15377.56, stdev=961.77, samples=9 00:24:21.554 iops : min= 1728, max= 2060, avg=1922.11, stdev=120.12, samples=9 00:24:21.554 lat (usec) : 1000=0.02% 00:24:21.554 lat (msec) : 2=0.52%, 4=46.02%, 10=53.44% 00:24:21.554 cpu : usr=91.30%, sys=7.80%, ctx=7, majf=0, minf=0 00:24:21.554 IO depths : 1=0.1%, 2=21.9%, 4=52.2%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 issued rwts: total=9598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.554 filename1: (groupid=0, jobs=1): err= 0: pid=99062: Tue Nov 19 16:19:27 2024 00:24:21.554 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5001msec) 00:24:21.554 slat (nsec): min=3407, max=58844, avg=14771.98, stdev=5031.75 00:24:21.554 clat (usec): min=1129, max=6941, avg=3920.94, stdev=597.21 00:24:21.554 lat (usec): min=1143, max=6954, avg=3935.71, stdev=597.35 00:24:21.554 clat percentiles (usec): 00:24:21.554 | 1.00th=[ 1647], 5.00th=[ 
2769], 10.00th=[ 3359], 20.00th=[ 3621], 00:24:21.554 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4047], 00:24:21.554 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4948], 00:24:21.554 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 5997], 99.95th=[ 6063], 00:24:21.554 | 99.99th=[ 6390] 00:24:21.554 bw ( KiB/s): min=14592, max=17264, per=24.75%, avg=16017.56, stdev=725.76, samples=9 00:24:21.554 iops : min= 1824, max= 2158, avg=2002.11, stdev=90.68, samples=9 00:24:21.554 lat (msec) : 2=1.41%, 4=51.63%, 10=46.96% 00:24:21.554 cpu : usr=91.28%, sys=7.84%, ctx=11, majf=0, minf=9 00:24:21.554 IO depths : 1=0.1%, 2=18.8%, 4=54.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 issued rwts: total=10071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.554 filename1: (groupid=0, jobs=1): err= 0: pid=99063: Tue Nov 19 16:19:27 2024 00:24:21.554 read: IOPS=2013, BW=15.7MiB/s (16.5MB/s)(78.7MiB/5002msec) 00:24:21.554 slat (nsec): min=4553, max=57647, avg=15289.49, stdev=5290.33 00:24:21.554 clat (usec): min=1119, max=6880, avg=3918.16, stdev=597.50 00:24:21.554 lat (usec): min=1144, max=6897, avg=3933.45, stdev=597.49 00:24:21.554 clat percentiles (usec): 00:24:21.554 | 1.00th=[ 1647], 5.00th=[ 2769], 10.00th=[ 3359], 20.00th=[ 3621], 00:24:21.554 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3982], 60.00th=[ 4047], 00:24:21.554 | 70.00th=[ 4113], 80.00th=[ 4228], 90.00th=[ 4490], 95.00th=[ 4948], 00:24:21.554 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 5997], 99.95th=[ 6325], 00:24:21.554 | 99.99th=[ 6521] 00:24:21.554 bw ( KiB/s): min=14592, max=17264, per=24.74%, avg=16014.00, stdev=725.85, samples=9 00:24:21.554 iops : min= 1824, max= 2158, avg=2001.67, stdev=90.69, samples=9 00:24:21.554 lat (msec) : 2=1.42%, 4=52.16%, 10=46.42% 00:24:21.554 cpu : usr=90.56%, sys=8.52%, ctx=12, majf=0, minf=0 00:24:21.554 IO depths : 1=0.1%, 2=18.8%, 4=54.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.554 issued rwts: total=10071,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.554 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:21.554 00:24:21.554 Run status group 0 (all jobs): 00:24:21.554 READ: bw=63.2MiB/s (66.3MB/s), 15.0MiB/s-16.8MiB/s (15.7MB/s-17.6MB/s), io=316MiB (331MB), run=5001-5002msec 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 00:24:21.554 real 0m23.113s 00:24:21.554 user 2m2.560s 00:24:21.554 sys 0m8.851s 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.554 ************************************ 00:24:21.554 END TEST fio_dif_rand_params 00:24:21.554 ************************************ 00:24:21.554 16:19:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:21.554 16:19:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:21.554 16:19:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 ************************************ 00:24:21.554 START TEST fio_dif_digest 00:24:21.554 ************************************ 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 bdev_null0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:21.554 [2024-11-19 16:19:27.607124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.554 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:21.554 { 00:24:21.554 "params": { 00:24:21.555 "name": "Nvme$subsystem", 00:24:21.555 "trtype": "$TEST_TRANSPORT", 00:24:21.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:21.555 "adrfam": "ipv4", 00:24:21.555 "trsvcid": "$NVMF_PORT", 00:24:21.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:21.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:21.555 "hdgst": ${hdgst:-false}, 00:24:21.555 "ddgst": ${ddgst:-false} 00:24:21.555 }, 00:24:21.555 "method": 
"bdev_nvme_attach_controller" 00:24:21.555 } 00:24:21.555 EOF 00:24:21.555 )") 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:21.555 "params": { 00:24:21.555 "name": "Nvme0", 00:24:21.555 "trtype": "tcp", 00:24:21.555 "traddr": "10.0.0.3", 00:24:21.555 "adrfam": "ipv4", 00:24:21.555 "trsvcid": "4420", 00:24:21.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:21.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:21.555 "hdgst": true, 00:24:21.555 "ddgst": true 00:24:21.555 }, 00:24:21.555 "method": "bdev_nvme_attach_controller" 00:24:21.555 }' 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:21.555 16:19:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:21.555 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:21.555 ... 
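The digest run above drives fio through SPDK's bdev fio plugin: the JSON printed just before it (with hdgst/ddgst set to true) is fed to --spdk_json_conf via /dev/fd/62, and a generated job file via /dev/fd/61. A minimal standalone equivalent using ordinary files is sketched below. The file names and the job-file contents are assumptions reconstructed from the parameters dif.sh sets for this case (bs=128k, iodepth=3, numjobs=3, runtime=10, randread) and from the Nvme0n1 bdev name that bdev_nvme_attach_controller conventionally produces; the exact job file the suite generates is not reproduced in this log.

# Hypothetical replay of the traced fio invocation (binary paths taken from the trace).
cat > /tmp/dif_digest_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
EOF

cat > /tmp/dif_digest.job <<'EOF'
[global]
ioengine=spdk_bdev
; the SPDK bdev plugin requires fio's thread mode
thread=1
direct=1
rw=randread
bs=128k
iodepth=3
runtime=10
time_based=1

[filename0]
; bdev created by the attach_controller call in the JSON above
filename=Nvme0n1
numjobs=3
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/dif_digest_bdev.json /tmp/dif_digest.job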
00:24:21.555 fio-3.35 00:24:21.555 Starting 3 threads 00:24:33.759 00:24:33.759 filename0: (groupid=0, jobs=1): err= 0: pid=99165: Tue Nov 19 16:19:38 2024 00:24:33.759 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10009msec) 00:24:33.759 slat (nsec): min=6860, max=46129, avg=15143.30, stdev=5058.14 00:24:33.759 clat (usec): min=9433, max=14752, avg=13084.40, stdev=842.26 00:24:33.759 lat (usec): min=9447, max=14777, avg=13099.54, stdev=843.06 00:24:33.759 clat percentiles (usec): 00:24:33.759 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:24:33.759 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[13566], 00:24:33.759 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:24:33.759 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14746], 99.95th=[14746], 00:24:33.759 | 99.99th=[14746] 00:24:33.759 bw ( KiB/s): min=27648, max=32256, per=33.39%, avg=29305.26, stdev=1604.10, samples=19 00:24:33.759 iops : min= 216, max= 252, avg=228.95, stdev=12.53, samples=19 00:24:33.759 lat (msec) : 10=0.13%, 20=99.87% 00:24:33.759 cpu : usr=91.32%, sys=8.10%, ctx=28, majf=0, minf=0 00:24:33.759 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:33.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:33.759 filename0: (groupid=0, jobs=1): err= 0: pid=99166: Tue Nov 19 16:19:38 2024 00:24:33.759 read: IOPS=228, BW=28.6MiB/s (29.9MB/s)(286MiB/10005msec) 00:24:33.759 slat (nsec): min=6731, max=51241, avg=10466.66, stdev=4893.42 00:24:33.759 clat (usec): min=11524, max=17382, avg=13103.52, stdev=835.53 00:24:33.759 lat (usec): min=11531, max=17409, avg=13113.99, stdev=836.18 00:24:33.759 clat percentiles (usec): 00:24:33.759 | 1.00th=[11731], 5.00th=[11863], 10.00th=[11863], 20.00th=[12125], 00:24:33.759 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13435], 60.00th=[13566], 00:24:33.759 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:24:33.759 | 99.00th=[14353], 99.50th=[14484], 99.90th=[17433], 99.95th=[17433], 00:24:33.759 | 99.99th=[17433] 00:24:33.759 bw ( KiB/s): min=27648, max=32256, per=33.39%, avg=29305.26, stdev=1583.54, samples=19 00:24:33.759 iops : min= 216, max= 252, avg=228.95, stdev=12.37, samples=19 00:24:33.759 lat (msec) : 20=100.00% 00:24:33.759 cpu : usr=90.69%, sys=8.73%, ctx=14, majf=0, minf=0 00:24:33.759 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:33.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 issued rwts: total=2286,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:33.759 filename0: (groupid=0, jobs=1): err= 0: pid=99167: Tue Nov 19 16:19:38 2024 00:24:33.759 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(286MiB/10010msec) 00:24:33.759 slat (nsec): min=6948, max=52472, avg=14504.23, stdev=5052.04 00:24:33.759 clat (usec): min=9443, max=14868, avg=13086.73, stdev=842.36 00:24:33.759 lat (usec): min=9461, max=14892, avg=13101.23, stdev=843.22 00:24:33.759 clat percentiles (usec): 00:24:33.759 | 1.00th=[11600], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:24:33.759 | 30.00th=[12387], 40.00th=[12911], 
50.00th=[13435], 60.00th=[13566], 00:24:33.759 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:24:33.759 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14877], 99.95th=[14877], 00:24:33.759 | 99.99th=[14877] 00:24:33.759 bw ( KiB/s): min=27648, max=32256, per=33.39%, avg=29305.26, stdev=1604.10, samples=19 00:24:33.759 iops : min= 216, max= 252, avg=228.95, stdev=12.53, samples=19 00:24:33.759 lat (msec) : 10=0.13%, 20=99.87% 00:24:33.759 cpu : usr=91.56%, sys=7.88%, ctx=11, majf=0, minf=9 00:24:33.759 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:33.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:33.759 issued rwts: total=2289,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:33.759 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:33.759 00:24:33.759 Run status group 0 (all jobs): 00:24:33.759 READ: bw=85.7MiB/s (89.9MB/s), 28.6MiB/s-28.6MiB/s (29.9MB/s-30.0MB/s), io=858MiB (900MB), run=10005-10010msec 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.759 00:24:33.759 real 0m10.916s 00:24:33.759 user 0m27.971s 00:24:33.759 sys 0m2.723s 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.759 ************************************ 00:24:33.759 END TEST fio_dif_digest 00:24:33.759 ************************************ 00:24:33.759 16:19:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.759 16:19:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:33.759 16:19:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:33.759 16:19:38 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.759 16:19:38 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:33.759 16:19:38 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.759 16:19:38 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.760 rmmod nvme_tcp 00:24:33.760 rmmod nvme_fabrics 00:24:33.760 rmmod nvme_keyring 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.760 16:19:38 
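The destroy_subsystems step above undoes the digest-test target configuration over JSON-RPC: the subsystem the fio job connected to is deleted and the null bdev behind it removed. rpc_cmd is the harness wrapper for SPDK's scripts/rpc.py talking to the app's RPC socket, so the equivalent direct calls would look roughly like this sketch:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Same two RPCs as the trace: drop the subsystem, then the null bdev behind it.
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_null_delete bdev_null0
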
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 98425 ']' 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 98425 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 98425 ']' 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 98425 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 98425 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.760 killing process with pid 98425 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 98425' 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@973 -- # kill 98425 00:24:33.760 16:19:38 nvmf_dif -- common/autotest_common.sh@978 -- # wait 98425 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:33.760 16:19:38 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:33.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:33.760 Waiting for block devices as requested 00:24:33.760 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:33.760 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.760 16:19:39 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.760 16:19:39 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:33.760 16:19:39 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.760 16:19:39 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:33.760 00:24:33.760 real 0m58.599s 00:24:33.760 user 3m45.041s 00:24:33.760 sys 0m20.160s 00:24:33.760 16:19:39 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.760 16:19:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:33.760 ************************************ 00:24:33.760 END TEST nvmf_dif 00:24:33.760 ************************************ 00:24:33.760 16:19:39 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:33.760 16:19:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:33.760 16:19:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.760 16:19:39 -- common/autotest_common.sh@10 -- # set +x 00:24:33.760 ************************************ 00:24:33.760 START TEST nvmf_abort_qd_sizes 00:24:33.760 ************************************ 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:33.760 * Looking for test storage... 00:24:33.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.760 --rc genhtml_branch_coverage=1 00:24:33.760 --rc genhtml_function_coverage=1 00:24:33.760 --rc genhtml_legend=1 00:24:33.760 --rc geninfo_all_blocks=1 00:24:33.760 --rc geninfo_unexecuted_blocks=1 00:24:33.760 00:24:33.760 ' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.760 --rc genhtml_branch_coverage=1 00:24:33.760 --rc genhtml_function_coverage=1 00:24:33.760 --rc genhtml_legend=1 00:24:33.760 --rc geninfo_all_blocks=1 00:24:33.760 --rc geninfo_unexecuted_blocks=1 00:24:33.760 00:24:33.760 ' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.760 --rc genhtml_branch_coverage=1 00:24:33.760 --rc genhtml_function_coverage=1 00:24:33.760 --rc genhtml_legend=1 00:24:33.760 --rc geninfo_all_blocks=1 00:24:33.760 --rc geninfo_unexecuted_blocks=1 00:24:33.760 00:24:33.760 ' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:33.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.760 --rc genhtml_branch_coverage=1 00:24:33.760 --rc genhtml_function_coverage=1 00:24:33.760 --rc genhtml_legend=1 00:24:33.760 --rc geninfo_all_blocks=1 00:24:33.760 --rc geninfo_unexecuted_blocks=1 00:24:33.760 00:24:33.760 ' 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.760 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.761 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:33.761 Cannot find device "nvmf_init_br" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:33.761 Cannot find device "nvmf_init_br2" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:33.761 Cannot find device "nvmf_tgt_br" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.761 Cannot find device "nvmf_tgt_br2" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:33.761 Cannot find device "nvmf_init_br" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:33.761 Cannot find device "nvmf_init_br2" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:33.761 Cannot find device "nvmf_tgt_br" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:33.761 Cannot find device "nvmf_tgt_br2" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:33.761 Cannot find device "nvmf_br" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:33.761 Cannot find device "nvmf_init_if" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:33.761 Cannot find device "nvmf_init_if2" 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.761 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:33.761 16:19:39 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:33.761 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:33.762 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:33.762 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.079 ms 00:24:33.762 00:24:33.762 --- 10.0.0.3 ping statistics --- 00:24:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.762 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:33.762 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:33.762 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.105 ms 00:24:33.762 00:24:33.762 --- 10.0.0.4 ping statistics --- 00:24:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.762 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:33.762 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.762 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:24:33.762 00:24:33.762 --- 10.0.0.1 ping statistics --- 00:24:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.762 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:33.762 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:33.762 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:24:33.762 00:24:33.762 --- 10.0.0.2 ping statistics --- 00:24:33.762 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.762 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:24:33.762 16:19:40 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:34.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:34.331 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:34.331 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:34.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=99817 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 99817 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 99817 ']' 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.590 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:34.590 [2024-11-19 16:19:41.174035] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
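Everything nvmf_veth_init just did reduces to a small fixed topology: a dedicated network namespace for the target, veth pairs bridged back to the host side, 10.0.0.1/10.0.0.2 on the initiator interfaces, 10.0.0.3/10.0.0.4 on the target interfaces inside the namespace, and iptables ACCEPT rules for the NVMe/TCP port, verified with the pings above. Condensed to one initiator/target pair as a sketch (names and addresses exactly as in the trace; the script also builds a second pair for 10.0.0.2/10.0.0.4 and a FORWARD rule for the bridge):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # host -> target namespace, as checked in the trace
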
00:24:34.590 [2024-11-19 16:19:41.174490] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.849 [2024-11-19 16:19:41.332675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.849 [2024-11-19 16:19:41.367363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.849 [2024-11-19 16:19:41.367719] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.849 [2024-11-19 16:19:41.367959] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.849 [2024-11-19 16:19:41.368256] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.849 [2024-11-19 16:19:41.368497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.849 [2024-11-19 16:19:41.369747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.849 [2024-11-19 16:19:41.370387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.849 [2024-11-19 16:19:41.370518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.849 [2024-11-19 16:19:41.370538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.849 [2024-11-19 16:19:41.417287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:34.849 16:19:41 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:34.849 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:34.850 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 ************************************ 00:24:35.109 START TEST spdk_target_abort 00:24:35.109 ************************************ 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 spdk_targetn1 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 [2024-11-19 16:19:41.648048] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:35.109 [2024-11-19 16:19:41.688627] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:35.109 16:19:41 
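Setting up the spdk_target_abort test comes down to the RPC sequence traced here against the freshly started nvmf_tgt: attach the local PCIe drive 0000:00:10.0 as bdev spdk_target (which exposes the namespace bdev spdk_targetn1), create the TCP transport, create subsystem nqn.2016-06.io.spdk:testnqn with that bdev as namespace 1, and listen on the veth address 10.0.0.3:4420. The harness issues these through rpc_cmd; as a sketch with scripts/rpc.py directly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420
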
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:35.109 16:19:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:38.396 Initializing NVMe Controllers 00:24:38.396 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:38.396 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:38.396 Initialization complete. Launching workers. 
00:24:38.396 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10714, failed: 0 00:24:38.396 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1050, failed to submit 9664 00:24:38.396 success 781, unsuccessful 269, failed 0 00:24:38.396 16:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:38.396 16:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:41.683 Initializing NVMe Controllers 00:24:41.683 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:41.683 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:41.683 Initialization complete. Launching workers. 00:24:41.683 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9001, failed: 0 00:24:41.683 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1154, failed to submit 7847 00:24:41.683 success 400, unsuccessful 754, failed 0 00:24:41.683 16:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:41.683 16:19:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.980 Initializing NVMe Controllers 00:24:44.980 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.980 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.980 Initialization complete. Launching workers. 
00:24:44.980 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30744, failed: 0 00:24:44.980 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2346, failed to submit 28398 00:24:44.980 success 429, unsuccessful 1917, failed 0 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.980 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 99817 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 99817 ']' 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 99817 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99817 00:24:45.240 killing process with pid 99817 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99817' 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 99817 00:24:45.240 16:19:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 99817 00:24:45.499 ************************************ 00:24:45.499 END TEST spdk_target_abort 00:24:45.499 ************************************ 00:24:45.499 00:24:45.499 real 0m10.432s 00:24:45.499 user 0m39.973s 00:24:45.499 sys 0m2.170s 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:45.499 16:19:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:45.499 16:19:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:45.499 16:19:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.499 16:19:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:45.499 ************************************ 00:24:45.499 START TEST kernel_target_abort 00:24:45.499 
************************************ 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:45.499 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:45.757 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:45.757 Waiting for block devices as requested 00:24:46.016 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:46.016 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:46.016 No valid GPT data, bailing 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:46.016 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:46.275 No valid GPT data, bailing 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
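The loop here walks the NVMe block devices, skipping anything zoned or already carrying a partition table ("No valid GPT data, bailing" is the expected result for a free disk), and keeps a usable one as the backing device for the kernel target. The target itself is then assembled through the nvmet configfs tree whose subsystem, namespace, and port paths were defined above. A generic sketch of that configfs sequence, not a verbatim copy of configure_kernel_target, with the device and address values (/dev/nvme0n1, 10.0.0.1:4420) shown only as illustrations:

    modprobe nvmet
    modprobe nvmet-tcp
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir -p "$sub/namespaces/1"
    echo 1 > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # free device found by the scan
    echo 1 > "$sub/namespaces/1/enable"
    port=/sys/kernel/config/nvmet/ports/1
    mkdir -p "$port"
    echo tcp > "$port/addr_trtype"
    echo ipv4 > "$port/addr_adrfam"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo 4420 > "$port/addr_trsvcid"
    ln -s "$sub" "$port/subsystems/"
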
00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:46.275 No valid GPT data, bailing 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:46.275 No valid GPT data, bailing 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:46.275 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 --hostid=92a6f107-e459-4aaa-bfee-246c0e15cbd1 -a 10.0.0.1 -t tcp -s 4420 00:24:46.275 00:24:46.275 Discovery Log Number of Records 2, Generation counter 2 00:24:46.275 =====Discovery Log Entry 0====== 00:24:46.275 trtype: tcp 00:24:46.275 adrfam: ipv4 00:24:46.275 subtype: current discovery subsystem 00:24:46.275 treq: not specified, sq flow control disable supported 00:24:46.275 portid: 1 00:24:46.275 trsvcid: 4420 00:24:46.275 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:46.275 traddr: 10.0.0.1 00:24:46.275 eflags: none 00:24:46.275 sectype: none 00:24:46.275 =====Discovery Log Entry 1====== 00:24:46.275 trtype: tcp 00:24:46.275 adrfam: ipv4 00:24:46.275 subtype: nvme subsystem 00:24:46.275 treq: not specified, sq flow control disable supported 00:24:46.275 portid: 1 00:24:46.275 trsvcid: 4420 00:24:46.275 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:46.275 traddr: 10.0.0.1 00:24:46.275 eflags: none 00:24:46.275 sectype: none 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:46.276 16:19:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:46.276 16:19:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:49.587 Initializing NVMe Controllers 00:24:49.587 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:49.587 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:49.587 Initialization complete. Launching workers. 00:24:49.587 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32427, failed: 0 00:24:49.587 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32427, failed to submit 0 00:24:49.587 success 0, unsuccessful 32427, failed 0 00:24:49.587 16:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:49.587 16:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.873 Initializing NVMe Controllers 00:24:52.873 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.873 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.873 Initialization complete. Launching workers. 
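The mkdir/echo/ln -s sequence traced just before these abort runs (nvmf/common.sh lines 686-705) is what stands up the kernel NVMe/TCP target at 10.0.0.1:4420. The xtrace records only the values being echoed, not the configfs attribute files they are written to, so the reconstruction below fills those in from the standard Linux nvmet configfs layout; the attribute names are an inference, not part of the captured output:

    # all redirection targets below are inferred from the nvmet configfs layout
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$sub/attr_serial"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"

Each pass of the rabort loop then drives build/examples/abort against that subsystem with the same transport string; the run above used -q 4 and the two runs below use -q 24 and -q 64.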
00:24:52.873 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 63390, failed: 0 00:24:52.873 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25371, failed to submit 38019 00:24:52.873 success 0, unsuccessful 25371, failed 0 00:24:52.873 16:19:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:52.873 16:19:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:56.162 Initializing NVMe Controllers 00:24:56.162 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:56.162 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:56.162 Initialization complete. Launching workers. 00:24:56.162 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 68578, failed: 0 00:24:56.162 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17102, failed to submit 51476 00:24:56.162 success 0, unsuccessful 17102, failed 0 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:56.162 16:20:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:56.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:57.298 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.298 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:57.298 00:24:57.298 real 0m11.890s 00:24:57.298 user 0m5.894s 00:24:57.298 sys 0m3.361s 00:24:57.298 ************************************ 00:24:57.298 END TEST kernel_target_abort 00:24:57.298 ************************************ 00:24:57.298 16:20:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.298 16:20:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:57.298 16:20:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:57.298 16:20:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:24:57.298 
16:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.298 16:20:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.557 rmmod nvme_tcp 00:24:57.557 rmmod nvme_fabrics 00:24:57.557 rmmod nvme_keyring 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 99817 ']' 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 99817 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 99817 ']' 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 99817 00:24:57.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (99817) - No such process 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 99817 is not found' 00:24:57.557 Process with pid 99817 is not found 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:24:57.557 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:57.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:57.816 Waiting for block devices as requested 00:24:57.816 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:58.075 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:58.075 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:58.075 16:20:04 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:24:58.340 00:24:58.340 real 0m25.345s 00:24:58.340 user 0m47.063s 00:24:58.340 sys 0m6.984s 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.340 16:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:58.340 ************************************ 00:24:58.340 END TEST nvmf_abort_qd_sizes 00:24:58.340 ************************************ 00:24:58.340 16:20:05 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:58.340 16:20:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:58.341 16:20:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.341 16:20:05 -- common/autotest_common.sh@10 -- # set +x 00:24:58.341 ************************************ 00:24:58.341 START TEST keyring_file 00:24:58.341 ************************************ 00:24:58.341 16:20:05 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:24:58.601 * Looking for test storage... 
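For readers skimming the log, the nvmftestfini teardown traced across the last few hundred milliseconds boils down to: unload the NVMe/TCP initiator modules, kill the target process if it is still around (pid 99817 was already gone), restore the firewall, and dismantle the veth/bridge fixture. Collected from the trace above, with the fixture names (nvmf_br, nvmf_init_if*, nvmf_tgt_if*, nvmf_tgt_ns_spdk) exactly as test/nvmf/common.sh creates them; the final netns removal is an assumption for what remove_spdk_ns does, since its body is not shown:

    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop the test's firewall rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # roughly what remove_spdk_ns does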
00:24:58.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@345 -- # : 1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@353 -- # local d=1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@355 -- # echo 1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@353 -- # local d=2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@355 -- # echo 2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.601 16:20:05 keyring_file -- scripts/common.sh@368 -- # return 0 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:58.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.601 --rc genhtml_branch_coverage=1 00:24:58.601 --rc genhtml_function_coverage=1 00:24:58.601 --rc genhtml_legend=1 00:24:58.601 --rc geninfo_all_blocks=1 00:24:58.601 --rc geninfo_unexecuted_blocks=1 00:24:58.601 00:24:58.601 ' 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:58.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.601 --rc genhtml_branch_coverage=1 00:24:58.601 --rc genhtml_function_coverage=1 00:24:58.601 --rc genhtml_legend=1 00:24:58.601 --rc geninfo_all_blocks=1 00:24:58.601 --rc 
geninfo_unexecuted_blocks=1 00:24:58.601 00:24:58.601 ' 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:58.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.601 --rc genhtml_branch_coverage=1 00:24:58.601 --rc genhtml_function_coverage=1 00:24:58.601 --rc genhtml_legend=1 00:24:58.601 --rc geninfo_all_blocks=1 00:24:58.601 --rc geninfo_unexecuted_blocks=1 00:24:58.601 00:24:58.601 ' 00:24:58.601 16:20:05 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:58.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.601 --rc genhtml_branch_coverage=1 00:24:58.601 --rc genhtml_function_coverage=1 00:24:58.601 --rc genhtml_legend=1 00:24:58.601 --rc geninfo_all_blocks=1 00:24:58.601 --rc geninfo_unexecuted_blocks=1 00:24:58.601 00:24:58.601 ' 00:24:58.601 16:20:05 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:58.602 16:20:05 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.602 16:20:05 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.602 16:20:05 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.602 16:20:05 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.602 16:20:05 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.602 16:20:05 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.602 16:20:05 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.602 16:20:05 keyring_file -- paths/export.sh@5 -- # export PATH 00:24:58.602 16:20:05 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@51 -- # : 0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:58.602 16:20:05 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Y19ISL5RVP 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Y19ISL5RVP 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Y19ISL5RVP 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.Y19ISL5RVP 00:24:58.602 16:20:05 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@17 -- # name=key1 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.RraNB4aCri 00:24:58.602 16:20:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:24:58.602 16:20:05 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:24:58.862 16:20:05 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:58.862 16:20:05 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:24:58.862 16:20:05 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:24:58.862 16:20:05 keyring_file -- nvmf/common.sh@733 -- # python - 00:24:58.862 16:20:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.RraNB4aCri 00:24:58.862 16:20:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.RraNB4aCri 00:24:58.862 16:20:05 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.RraNB4aCri 00:24:58.862 16:20:05 keyring_file -- keyring/file.sh@30 -- # tgtpid=100715 00:24:58.862 16:20:05 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:58.862 16:20:05 keyring_file -- keyring/file.sh@32 -- # waitforlisten 100715 00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 100715 ']' 00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
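The keyring_file test about to run works on two throwaway TLS PSK files. The prep_key calls traced above create them: mktemp a path, render the raw hex key (00112233445566778899aabbccddeeff and 112233445566778899aabbccddeeff00) into the NVMeTLSkey-1 interchange format via an inline python helper, and lock the file down to 0600 so the keyring module will accept it. A sketch of the helper's shape as it appears in the trace; the redirection into the temp file and the body of format_interchange_psk are not visible in the xtrace, so they are assumed here:

    prep_key() {
        local name=$1 key=$2 digest=$3 path
        path=$(mktemp)                                      # e.g. /tmp/tmp.Y19ISL5RVP
        format_interchange_psk "$key" "$digest" > "$path"   # emits the NVMeTLSkey-1:...: string
        chmod 0600 "$path"                                  # later in the test, 0660 is rejected
        echo "$path"
    }

    key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
    key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)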
00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.862 16:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:58.862 [2024-11-19 16:20:05.458337] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:24:58.862 [2024-11-19 16:20:05.458440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100715 ] 00:24:59.121 [2024-11-19 16:20:05.612590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.121 [2024-11-19 16:20:05.636216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.121 [2024-11-19 16:20:05.677342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:59.121 16:20:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.121 16:20:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:59.121 16:20:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:24:59.121 16:20:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.121 16:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:59.121 [2024-11-19 16:20:05.818222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.380 null0 00:24:59.380 [2024-11-19 16:20:05.850197] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:59.380 [2024-11-19 16:20:05.850410] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:59.380 16:20:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.381 16:20:05 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:59.381 [2024-11-19 16:20:05.882182] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:24:59.381 request: 00:24:59.381 { 00:24:59.381 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:24:59.381 "secure_channel": false, 00:24:59.381 "listen_address": { 00:24:59.381 "trtype": "tcp", 00:24:59.381 "traddr": "127.0.0.1", 00:24:59.381 "trsvcid": "4420" 00:24:59.381 }, 00:24:59.381 "method": "nvmf_subsystem_add_listener", 
00:24:59.381 "req_id": 1 00:24:59.381 } 00:24:59.381 Got JSON-RPC error response 00:24:59.381 response: 00:24:59.381 { 00:24:59.381 "code": -32602, 00:24:59.381 "message": "Invalid parameters" 00:24:59.381 } 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:59.381 16:20:05 keyring_file -- keyring/file.sh@47 -- # bperfpid=100720 00:24:59.381 16:20:05 keyring_file -- keyring/file.sh@49 -- # waitforlisten 100720 /var/tmp/bperf.sock 00:24:59.381 16:20:05 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 100720 ']' 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.381 16:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:24:59.381 [2024-11-19 16:20:05.956601] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:24:59.381 [2024-11-19 16:20:05.956700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100720 ] 00:24:59.640 [2024-11-19 16:20:06.105103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.640 [2024-11-19 16:20:06.124779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.640 [2024-11-19 16:20:06.153196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:59.640 16:20:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.640 16:20:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:24:59.640 16:20:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:24:59.640 16:20:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:24:59.898 16:20:06 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RraNB4aCri 00:24:59.898 16:20:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RraNB4aCri 00:25:00.158 16:20:06 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:00.158 16:20:06 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:00.158 16:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.158 16:20:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.158 16:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.416 16:20:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.Y19ISL5RVP == \/\t\m\p\/\t\m\p\.\Y\1\9\I\S\L\5\R\V\P ]] 00:25:00.416 16:20:07 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:00.416 16:20:07 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:00.416 16:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.416 16:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:00.416 16:20:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:00.675 16:20:07 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.RraNB4aCri == \/\t\m\p\/\t\m\p\.\R\r\a\N\B\4\a\C\r\i ]] 00:25:00.675 16:20:07 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:00.675 16:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:00.675 16:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:00.675 16:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:00.675 16:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:00.675 16:20:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.243 16:20:07 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:01.243 16:20:07 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:01.243 16:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:01.243 16:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.243 16:20:07 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.243 16:20:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.243 16:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:01.243 16:20:07 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:01.243 16:20:07 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.243 16:20:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:01.502 [2024-11-19 16:20:08.129628] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.502 nvme0n1 00:25:01.761 16:20:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:01.761 16:20:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:01.761 16:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:01.761 16:20:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:01.761 16:20:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:01.761 16:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.019 16:20:08 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:02.019 16:20:08 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:02.019 16:20:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:02.019 16:20:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:02.019 16:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.019 16:20:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:02.019 16:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:02.278 16:20:08 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:02.278 16:20:08 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:02.278 Running I/O for 1 seconds... 
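Everything driven through bperf_cmd above is plain rpc.py traffic aimed at bdevperf's own RPC socket rather than the target's. Spelled out as standalone commands, with the paths copied from the trace, the key registration, the TLS-protected attach, and the refcnt check the test keeps repeating look like this:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    "$rpc" -s "$sock" keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    # refcnt is 1 once the key is registered and 2 while nvme0 holds it
    "$rpc" -s "$sock" keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'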
00:25:03.214 12059.00 IOPS, 47.11 MiB/s 00:25:03.214 Latency(us) 00:25:03.214 [2024-11-19T16:20:09.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.214 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:03.214 nvme0n1 : 1.01 12111.04 47.31 0.00 0.00 10541.89 4200.26 21090.68 00:25:03.214 [2024-11-19T16:20:09.929Z] =================================================================================================================== 00:25:03.214 [2024-11-19T16:20:09.929Z] Total : 12111.04 47.31 0.00 0.00 10541.89 4200.26 21090.68 00:25:03.214 { 00:25:03.214 "results": [ 00:25:03.214 { 00:25:03.214 "job": "nvme0n1", 00:25:03.214 "core_mask": "0x2", 00:25:03.214 "workload": "randrw", 00:25:03.214 "percentage": 50, 00:25:03.214 "status": "finished", 00:25:03.214 "queue_depth": 128, 00:25:03.214 "io_size": 4096, 00:25:03.214 "runtime": 1.006437, 00:25:03.214 "iops": 12111.041227617825, 00:25:03.214 "mibps": 47.30875479538213, 00:25:03.214 "io_failed": 0, 00:25:03.214 "io_timeout": 0, 00:25:03.214 "avg_latency_us": 10541.89345922926, 00:25:03.214 "min_latency_us": 4200.261818181818, 00:25:03.214 "max_latency_us": 21090.676363636365 00:25:03.214 } 00:25:03.214 ], 00:25:03.214 "core_count": 1 00:25:03.214 } 00:25:03.214 16:20:09 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:03.214 16:20:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:03.473 16:20:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:03.473 16:20:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.473 16:20:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.473 16:20:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.473 16:20:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.473 16:20:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.731 16:20:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:03.731 16:20:10 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:03.731 16:20:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.731 16:20:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.731 16:20:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.731 16:20:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.731 16:20:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.990 16:20:10 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:03.990 16:20:10 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:03.990 16:20:10 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.990 16:20:10 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:03.990 16:20:10 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:04.559 [2024-11-19 16:20:10.995005] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:04.559 [2024-11-19 16:20:10.995133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ea390 (107): Transport endpoint is not connected 00:25:04.559 [2024-11-19 16:20:10.996110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ea390 (9): Bad file descriptor 00:25:04.559 [2024-11-19 16:20:10.997107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:04.559 [2024-11-19 16:20:10.997138] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:04.559 [2024-11-19 16:20:10.997162] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:04.559 [2024-11-19 16:20:10.997189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
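The error storm above, and the JSON-RPC request/response that follows below, are another expected failure: after detaching nvme0 the test retries the attach using key1 instead of key0 and requires the call to fail (the -5 Input/output error below) rather than hand back a controller. Reduced to the single command under test, using the NOT and bperf_cmd wrappers already shown:

    # keyring/file.sh@70: attaching with a PSK the target does not accept must fail
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
            -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1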
00:25:04.559 request: 00:25:04.559 { 00:25:04.559 "name": "nvme0", 00:25:04.559 "trtype": "tcp", 00:25:04.559 "traddr": "127.0.0.1", 00:25:04.559 "adrfam": "ipv4", 00:25:04.559 "trsvcid": "4420", 00:25:04.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:04.559 "prchk_reftag": false, 00:25:04.559 "prchk_guard": false, 00:25:04.559 "hdgst": false, 00:25:04.559 "ddgst": false, 00:25:04.559 "psk": "key1", 00:25:04.559 "allow_unrecognized_csi": false, 00:25:04.559 "method": "bdev_nvme_attach_controller", 00:25:04.559 "req_id": 1 00:25:04.559 } 00:25:04.559 Got JSON-RPC error response 00:25:04.559 response: 00:25:04.559 { 00:25:04.559 "code": -5, 00:25:04.559 "message": "Input/output error" 00:25:04.559 } 00:25:04.559 16:20:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:04.559 16:20:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.559 16:20:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.559 16:20:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.559 16:20:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:04.559 16:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:04.559 16:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.559 16:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.559 16:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:04.559 16:20:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.819 16:20:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:04.819 16:20:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:04.819 16:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:04.819 16:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.819 16:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.819 16:20:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.819 16:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:05.077 16:20:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:05.078 16:20:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:05.078 16:20:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:05.336 16:20:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:05.336 16:20:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:05.595 16:20:12 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:05.595 16:20:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:05.595 16:20:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:05.854 16:20:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:05.854 16:20:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.Y19ISL5RVP 00:25:05.854 16:20:12 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:05.854 16:20:12 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:05.854 16:20:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:05.854 16:20:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:06.113 [2024-11-19 16:20:12.738402] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Y19ISL5RVP': 0100660 00:25:06.113 [2024-11-19 16:20:12.738738] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:06.113 request: 00:25:06.113 { 00:25:06.113 "name": "key0", 00:25:06.113 "path": "/tmp/tmp.Y19ISL5RVP", 00:25:06.113 "method": "keyring_file_add_key", 00:25:06.113 "req_id": 1 00:25:06.113 } 00:25:06.113 Got JSON-RPC error response 00:25:06.113 response: 00:25:06.113 { 00:25:06.113 "code": -1, 00:25:06.113 "message": "Operation not permitted" 00:25:06.113 } 00:25:06.113 16:20:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:06.113 16:20:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:06.113 16:20:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:06.113 16:20:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:06.113 16:20:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.Y19ISL5RVP 00:25:06.113 16:20:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:06.113 16:20:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Y19ISL5RVP 00:25:06.680 16:20:13 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.Y19ISL5RVP 00:25:06.680 16:20:13 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:06.680 16:20:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.680 16:20:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.680 16:20:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.680 16:20:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.680 16:20:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.940 16:20:13 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:06.940 16:20:13 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.940 16:20:13 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.940 16:20:13 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.940 16:20:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:06.940 [2024-11-19 16:20:13.642666] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.Y19ISL5RVP': No such file or directory 00:25:06.940 [2024-11-19 16:20:13.642729] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:06.940 [2024-11-19 16:20:13.642752] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:06.940 [2024-11-19 16:20:13.642774] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:06.940 [2024-11-19 16:20:13.642784] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:06.940 [2024-11-19 16:20:13.642792] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:06.940 request: 00:25:06.940 { 00:25:06.940 "name": "nvme0", 00:25:06.940 "trtype": "tcp", 00:25:06.940 "traddr": "127.0.0.1", 00:25:06.940 "adrfam": "ipv4", 00:25:06.940 "trsvcid": "4420", 00:25:06.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:06.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:06.940 "prchk_reftag": false, 00:25:06.940 "prchk_guard": false, 00:25:06.940 "hdgst": false, 00:25:06.940 "ddgst": false, 00:25:06.940 "psk": "key0", 00:25:06.940 "allow_unrecognized_csi": false, 00:25:06.940 "method": "bdev_nvme_attach_controller", 00:25:06.940 "req_id": 1 00:25:06.940 } 00:25:06.940 Got JSON-RPC error response 00:25:06.940 response: 00:25:06.940 { 00:25:06.940 "code": -19, 00:25:06.940 "message": "No such device" 00:25:06.940 } 00:25:07.199 16:20:13 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:25:07.199 16:20:13 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.199 16:20:13 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.199 16:20:13 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.199 16:20:13 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:07.199 16:20:13 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.458 16:20:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:07.458 
16:20:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LX933iacAM 00:25:07.458 16:20:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:25:07.458 16:20:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:25:07.458 16:20:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LX933iacAM 00:25:07.458 16:20:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LX933iacAM 00:25:07.458 16:20:14 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.LX933iacAM 00:25:07.458 16:20:14 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LX933iacAM 00:25:07.458 16:20:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LX933iacAM 00:25:07.717 16:20:14 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:07.717 16:20:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:07.976 nvme0n1 00:25:07.976 16:20:14 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:07.976 16:20:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.976 16:20:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.976 16:20:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.976 16:20:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.976 16:20:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:08.235 16:20:14 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:08.235 16:20:14 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:08.235 16:20:14 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:08.493 16:20:15 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:08.493 16:20:15 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:08.493 16:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.493 16:20:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.493 16:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.061 16:20:15 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:09.061 16:20:15 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:09.061 16:20:15 keyring_file -- 
keyring/common.sh@12 -- # jq -r .refcnt 00:25:09.061 16:20:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:09.061 16:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:09.061 16:20:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.061 16:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.319 16:20:15 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:09.319 16:20:15 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:09.320 16:20:15 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:09.578 16:20:16 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:09.578 16:20:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:09.578 16:20:16 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:09.836 16:20:16 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:09.836 16:20:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LX933iacAM 00:25:09.836 16:20:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LX933iacAM 00:25:10.095 16:20:16 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.RraNB4aCri 00:25:10.095 16:20:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.RraNB4aCri 00:25:10.354 16:20:16 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.354 16:20:16 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.613 nvme0n1 00:25:10.613 16:20:17 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:10.613 16:20:17 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:10.873 16:20:17 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:10.873 "subsystems": [ 00:25:10.873 { 00:25:10.873 "subsystem": "keyring", 00:25:10.873 "config": [ 00:25:10.873 { 00:25:10.873 "method": "keyring_file_add_key", 00:25:10.873 "params": { 00:25:10.873 "name": "key0", 00:25:10.873 "path": "/tmp/tmp.LX933iacAM" 00:25:10.873 } 00:25:10.873 }, 00:25:10.873 { 00:25:10.873 "method": "keyring_file_add_key", 00:25:10.873 "params": { 00:25:10.873 "name": "key1", 00:25:10.873 "path": "/tmp/tmp.RraNB4aCri" 00:25:10.873 } 00:25:10.873 } 00:25:10.873 ] 00:25:10.873 }, 00:25:10.873 { 00:25:10.873 "subsystem": "iobuf", 00:25:10.873 "config": [ 00:25:10.873 { 00:25:10.873 "method": "iobuf_set_options", 00:25:10.873 "params": { 00:25:10.873 "small_pool_count": 8192, 00:25:10.873 "large_pool_count": 1024, 00:25:10.873 "small_bufsize": 8192, 00:25:10.873 "large_bufsize": 135168, 00:25:10.873 "enable_numa": false 00:25:10.873 } 00:25:10.873 } 00:25:10.873 ] 00:25:10.873 }, 00:25:10.873 { 00:25:10.873 "subsystem": 
"sock", 00:25:10.873 "config": [ 00:25:10.873 { 00:25:10.873 "method": "sock_set_default_impl", 00:25:10.873 "params": { 00:25:10.874 "impl_name": "uring" 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "sock_impl_set_options", 00:25:10.874 "params": { 00:25:10.874 "impl_name": "ssl", 00:25:10.874 "recv_buf_size": 4096, 00:25:10.874 "send_buf_size": 4096, 00:25:10.874 "enable_recv_pipe": true, 00:25:10.874 "enable_quickack": false, 00:25:10.874 "enable_placement_id": 0, 00:25:10.874 "enable_zerocopy_send_server": true, 00:25:10.874 "enable_zerocopy_send_client": false, 00:25:10.874 "zerocopy_threshold": 0, 00:25:10.874 "tls_version": 0, 00:25:10.874 "enable_ktls": false 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "sock_impl_set_options", 00:25:10.874 "params": { 00:25:10.874 "impl_name": "posix", 00:25:10.874 "recv_buf_size": 2097152, 00:25:10.874 "send_buf_size": 2097152, 00:25:10.874 "enable_recv_pipe": true, 00:25:10.874 "enable_quickack": false, 00:25:10.874 "enable_placement_id": 0, 00:25:10.874 "enable_zerocopy_send_server": true, 00:25:10.874 "enable_zerocopy_send_client": false, 00:25:10.874 "zerocopy_threshold": 0, 00:25:10.874 "tls_version": 0, 00:25:10.874 "enable_ktls": false 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "sock_impl_set_options", 00:25:10.874 "params": { 00:25:10.874 "impl_name": "uring", 00:25:10.874 "recv_buf_size": 2097152, 00:25:10.874 "send_buf_size": 2097152, 00:25:10.874 "enable_recv_pipe": true, 00:25:10.874 "enable_quickack": false, 00:25:10.874 "enable_placement_id": 0, 00:25:10.874 "enable_zerocopy_send_server": false, 00:25:10.874 "enable_zerocopy_send_client": false, 00:25:10.874 "zerocopy_threshold": 0, 00:25:10.874 "tls_version": 0, 00:25:10.874 "enable_ktls": false 00:25:10.874 } 00:25:10.874 } 00:25:10.874 ] 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "subsystem": "vmd", 00:25:10.874 "config": [] 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "subsystem": "accel", 00:25:10.874 "config": [ 00:25:10.874 { 00:25:10.874 "method": "accel_set_options", 00:25:10.874 "params": { 00:25:10.874 "small_cache_size": 128, 00:25:10.874 "large_cache_size": 16, 00:25:10.874 "task_count": 2048, 00:25:10.874 "sequence_count": 2048, 00:25:10.874 "buf_count": 2048 00:25:10.874 } 00:25:10.874 } 00:25:10.874 ] 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "subsystem": "bdev", 00:25:10.874 "config": [ 00:25:10.874 { 00:25:10.874 "method": "bdev_set_options", 00:25:10.874 "params": { 00:25:10.874 "bdev_io_pool_size": 65535, 00:25:10.874 "bdev_io_cache_size": 256, 00:25:10.874 "bdev_auto_examine": true, 00:25:10.874 "iobuf_small_cache_size": 128, 00:25:10.874 "iobuf_large_cache_size": 16 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_raid_set_options", 00:25:10.874 "params": { 00:25:10.874 "process_window_size_kb": 1024, 00:25:10.874 "process_max_bandwidth_mb_sec": 0 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_iscsi_set_options", 00:25:10.874 "params": { 00:25:10.874 "timeout_sec": 30 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_nvme_set_options", 00:25:10.874 "params": { 00:25:10.874 "action_on_timeout": "none", 00:25:10.874 "timeout_us": 0, 00:25:10.874 "timeout_admin_us": 0, 00:25:10.874 "keep_alive_timeout_ms": 10000, 00:25:10.874 "arbitration_burst": 0, 00:25:10.874 "low_priority_weight": 0, 00:25:10.874 "medium_priority_weight": 0, 00:25:10.874 "high_priority_weight": 0, 00:25:10.874 "nvme_adminq_poll_period_us": 
10000, 00:25:10.874 "nvme_ioq_poll_period_us": 0, 00:25:10.874 "io_queue_requests": 512, 00:25:10.874 "delay_cmd_submit": true, 00:25:10.874 "transport_retry_count": 4, 00:25:10.874 "bdev_retry_count": 3, 00:25:10.874 "transport_ack_timeout": 0, 00:25:10.874 "ctrlr_loss_timeout_sec": 0, 00:25:10.874 "reconnect_delay_sec": 0, 00:25:10.874 "fast_io_fail_timeout_sec": 0, 00:25:10.874 "disable_auto_failback": false, 00:25:10.874 "generate_uuids": false, 00:25:10.874 "transport_tos": 0, 00:25:10.874 "nvme_error_stat": false, 00:25:10.874 "rdma_srq_size": 0, 00:25:10.874 "io_path_stat": false, 00:25:10.874 "allow_accel_sequence": false, 00:25:10.874 "rdma_max_cq_size": 0, 00:25:10.874 "rdma_cm_event_timeout_ms": 0, 00:25:10.874 "dhchap_digests": [ 00:25:10.874 "sha256", 00:25:10.874 "sha384", 00:25:10.874 "sha512" 00:25:10.874 ], 00:25:10.874 "dhchap_dhgroups": [ 00:25:10.874 "null", 00:25:10.874 "ffdhe2048", 00:25:10.874 "ffdhe3072", 00:25:10.874 "ffdhe4096", 00:25:10.874 "ffdhe6144", 00:25:10.874 "ffdhe8192" 00:25:10.874 ] 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_nvme_attach_controller", 00:25:10.874 "params": { 00:25:10.874 "name": "nvme0", 00:25:10.874 "trtype": "TCP", 00:25:10.874 "adrfam": "IPv4", 00:25:10.874 "traddr": "127.0.0.1", 00:25:10.874 "trsvcid": "4420", 00:25:10.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.874 "prchk_reftag": false, 00:25:10.874 "prchk_guard": false, 00:25:10.874 "ctrlr_loss_timeout_sec": 0, 00:25:10.874 "reconnect_delay_sec": 0, 00:25:10.874 "fast_io_fail_timeout_sec": 0, 00:25:10.874 "psk": "key0", 00:25:10.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:10.874 "hdgst": false, 00:25:10.874 "ddgst": false, 00:25:10.874 "multipath": "multipath" 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_nvme_set_hotplug", 00:25:10.874 "params": { 00:25:10.874 "period_us": 100000, 00:25:10.874 "enable": false 00:25:10.874 } 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "method": "bdev_wait_for_examine" 00:25:10.874 } 00:25:10.874 ] 00:25:10.874 }, 00:25:10.874 { 00:25:10.874 "subsystem": "nbd", 00:25:10.874 "config": [] 00:25:10.874 } 00:25:10.874 ] 00:25:10.874 }' 00:25:10.874 16:20:17 keyring_file -- keyring/file.sh@115 -- # killprocess 100720 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 100720 ']' 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 100720 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100720 00:25:10.874 killing process with pid 100720 00:25:10.874 Received shutdown signal, test time was about 1.000000 seconds 00:25:10.874 00:25:10.874 Latency(us) 00:25:10.874 [2024-11-19T16:20:17.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:10.874 [2024-11-19T16:20:17.589Z] =================================================================================================================== 00:25:10.874 [2024-11-19T16:20:17.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100720' 00:25:10.874 
16:20:17 keyring_file -- common/autotest_common.sh@973 -- # kill 100720 00:25:10.874 16:20:17 keyring_file -- common/autotest_common.sh@978 -- # wait 100720 00:25:11.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:11.135 16:20:17 keyring_file -- keyring/file.sh@118 -- # bperfpid=100970 00:25:11.135 16:20:17 keyring_file -- keyring/file.sh@120 -- # waitforlisten 100970 /var/tmp/bperf.sock 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 100970 ']' 00:25:11.135 16:20:17 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.135 16:20:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:11.135 16:20:17 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:11.135 "subsystems": [ 00:25:11.135 { 00:25:11.135 "subsystem": "keyring", 00:25:11.135 "config": [ 00:25:11.135 { 00:25:11.135 "method": "keyring_file_add_key", 00:25:11.135 "params": { 00:25:11.135 "name": "key0", 00:25:11.135 "path": "/tmp/tmp.LX933iacAM" 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "keyring_file_add_key", 00:25:11.135 "params": { 00:25:11.135 "name": "key1", 00:25:11.135 "path": "/tmp/tmp.RraNB4aCri" 00:25:11.135 } 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "subsystem": "iobuf", 00:25:11.135 "config": [ 00:25:11.135 { 00:25:11.135 "method": "iobuf_set_options", 00:25:11.135 "params": { 00:25:11.135 "small_pool_count": 8192, 00:25:11.135 "large_pool_count": 1024, 00:25:11.135 "small_bufsize": 8192, 00:25:11.135 "large_bufsize": 135168, 00:25:11.135 "enable_numa": false 00:25:11.135 } 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "subsystem": "sock", 00:25:11.135 "config": [ 00:25:11.135 { 00:25:11.135 "method": "sock_set_default_impl", 00:25:11.135 "params": { 00:25:11.135 "impl_name": "uring" 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "sock_impl_set_options", 00:25:11.135 "params": { 00:25:11.135 "impl_name": "ssl", 00:25:11.135 "recv_buf_size": 4096, 00:25:11.135 "send_buf_size": 4096, 00:25:11.135 "enable_recv_pipe": true, 00:25:11.135 "enable_quickack": false, 00:25:11.135 "enable_placement_id": 0, 00:25:11.135 "enable_zerocopy_send_server": true, 00:25:11.135 "enable_zerocopy_send_client": false, 00:25:11.135 "zerocopy_threshold": 0, 00:25:11.135 "tls_version": 0, 00:25:11.135 "enable_ktls": false 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "sock_impl_set_options", 00:25:11.135 "params": { 00:25:11.135 "impl_name": "posix", 00:25:11.135 "recv_buf_size": 2097152, 00:25:11.135 "send_buf_size": 2097152, 00:25:11.135 "enable_recv_pipe": true, 00:25:11.135 "enable_quickack": false, 00:25:11.135 "enable_placement_id": 0, 00:25:11.135 "enable_zerocopy_send_server": true, 00:25:11.135 "enable_zerocopy_send_client": false, 00:25:11.135 "zerocopy_threshold": 0, 00:25:11.135 "tls_version": 0, 00:25:11.135 "enable_ktls": false 
00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "sock_impl_set_options", 00:25:11.135 "params": { 00:25:11.135 "impl_name": "uring", 00:25:11.135 "recv_buf_size": 2097152, 00:25:11.135 "send_buf_size": 2097152, 00:25:11.135 "enable_recv_pipe": true, 00:25:11.135 "enable_quickack": false, 00:25:11.135 "enable_placement_id": 0, 00:25:11.135 "enable_zerocopy_send_server": false, 00:25:11.135 "enable_zerocopy_send_client": false, 00:25:11.135 "zerocopy_threshold": 0, 00:25:11.135 "tls_version": 0, 00:25:11.135 "enable_ktls": false 00:25:11.135 } 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "subsystem": "vmd", 00:25:11.135 "config": [] 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "subsystem": "accel", 00:25:11.135 "config": [ 00:25:11.135 { 00:25:11.135 "method": "accel_set_options", 00:25:11.135 "params": { 00:25:11.135 "small_cache_size": 128, 00:25:11.135 "large_cache_size": 16, 00:25:11.135 "task_count": 2048, 00:25:11.135 "sequence_count": 2048, 00:25:11.135 "buf_count": 2048 00:25:11.135 } 00:25:11.135 } 00:25:11.135 ] 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "subsystem": "bdev", 00:25:11.135 "config": [ 00:25:11.135 { 00:25:11.135 "method": "bdev_set_options", 00:25:11.135 "params": { 00:25:11.135 "bdev_io_pool_size": 65535, 00:25:11.135 "bdev_io_cache_size": 256, 00:25:11.135 "bdev_auto_examine": true, 00:25:11.135 "iobuf_small_cache_size": 128, 00:25:11.135 "iobuf_large_cache_size": 16 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "bdev_raid_set_options", 00:25:11.135 "params": { 00:25:11.135 "process_window_size_kb": 1024, 00:25:11.135 "process_max_bandwidth_mb_sec": 0 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "bdev_iscsi_set_options", 00:25:11.135 "params": { 00:25:11.135 "timeout_sec": 30 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "bdev_nvme_set_options", 00:25:11.135 "params": { 00:25:11.135 "action_on_timeout": "none", 00:25:11.135 "timeout_us": 0, 00:25:11.135 "timeout_admin_us": 0, 00:25:11.135 "keep_alive_timeout_ms": 10000, 00:25:11.135 "arbitration_burst": 0, 00:25:11.135 "low_priority_weight": 0, 00:25:11.135 "medium_priority_weight": 0, 00:25:11.135 "high_priority_weight": 0, 00:25:11.135 "nvme_adminq_poll_period_us": 10000, 00:25:11.135 "nvme_ioq_poll_period_us": 0, 00:25:11.135 "io_queue_requests": 512, 00:25:11.135 "delay_cmd_submit": true, 00:25:11.135 "transport_retry_count": 4, 00:25:11.135 "bdev_retry_count": 3, 00:25:11.135 "transport_ack_timeout": 0, 00:25:11.135 "ctrlr_loss_timeout_sec": 0, 00:25:11.135 "reconnect_delay_sec": 0, 00:25:11.135 "fast_io_fail_timeout_sec": 0, 00:25:11.135 "disable_auto_failback": false, 00:25:11.135 "generate_uuids": false, 00:25:11.135 "transport_tos": 0, 00:25:11.135 "nvme_error_stat": false, 00:25:11.135 "rdma_srq_size": 0, 00:25:11.135 "io_path_stat": false, 00:25:11.135 "allow_accel_sequence": false, 00:25:11.135 "rdma_max_cq_size": 0, 00:25:11.135 "rdma_cm_event_timeout_ms": 0, 00:25:11.135 "dhchap_digests": [ 00:25:11.135 "sha256", 00:25:11.135 "sha384", 00:25:11.135 "sha512" 00:25:11.135 ], 00:25:11.135 "dhchap_dhgroups": [ 00:25:11.135 "null", 00:25:11.135 "ffdhe2048", 00:25:11.135 "ffdhe3072", 00:25:11.135 "ffdhe4096", 00:25:11.135 "ffdhe6144", 00:25:11.135 "ffdhe8192" 00:25:11.135 ] 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "bdev_nvme_attach_controller", 00:25:11.135 "params": { 00:25:11.135 "name": "nvme0", 00:25:11.135 "trtype": "TCP", 00:25:11.135 "adrfam": "IPv4", 
00:25:11.135 "traddr": "127.0.0.1", 00:25:11.135 "trsvcid": "4420", 00:25:11.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:11.135 "prchk_reftag": false, 00:25:11.135 "prchk_guard": false, 00:25:11.135 "ctrlr_loss_timeout_sec": 0, 00:25:11.135 "reconnect_delay_sec": 0, 00:25:11.135 "fast_io_fail_timeout_sec": 0, 00:25:11.135 "psk": "key0", 00:25:11.135 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:11.135 "hdgst": false, 00:25:11.135 "ddgst": false, 00:25:11.135 "multipath": "multipath" 00:25:11.135 } 00:25:11.135 }, 00:25:11.135 { 00:25:11.135 "method": "bdev_nvme_set_hotplug", 00:25:11.135 "params": { 00:25:11.136 "period_us": 100000, 00:25:11.136 "enable": false 00:25:11.136 } 00:25:11.136 }, 00:25:11.136 { 00:25:11.136 "method": "bdev_wait_for_examine" 00:25:11.136 } 00:25:11.136 ] 00:25:11.136 }, 00:25:11.136 { 00:25:11.136 "subsystem": "nbd", 00:25:11.136 "config": [] 00:25:11.136 } 00:25:11.136 ] 00:25:11.136 }' 00:25:11.136 [2024-11-19 16:20:17.668901] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 00:25:11.136 [2024-11-19 16:20:17.669186] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100970 ] 00:25:11.136 [2024-11-19 16:20:17.811873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.136 [2024-11-19 16:20:17.834372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.395 [2024-11-19 16:20:17.947822] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:11.395 [2024-11-19 16:20:17.986600] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.963 16:20:18 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.963 16:20:18 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:25:12.222 16:20:18 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:12.222 16:20:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.222 16:20:18 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:12.481 16:20:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:12.481 16:20:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:12.481 16:20:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.481 16:20:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:12.481 16:20:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.481 16:20:18 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.481 16:20:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:12.739 16:20:19 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:12.739 16:20:19 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:12.739 16:20:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:12.739 16:20:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:12.739 16:20:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:12.739 16:20:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:12.739 16:20:19 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:12.997 16:20:19 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:12.997 16:20:19 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:12.997 16:20:19 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:12.997 16:20:19 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:25:13.256 16:20:19 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:13.256 16:20:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:13.256 16:20:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LX933iacAM /tmp/tmp.RraNB4aCri 00:25:13.256 16:20:19 keyring_file -- keyring/file.sh@20 -- # killprocess 100970 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 100970 ']' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 100970 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100970 00:25:13.256 killing process with pid 100970 00:25:13.256 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.256 00:25:13.256 Latency(us) 00:25:13.256 [2024-11-19T16:20:19.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.256 [2024-11-19T16:20:19.971Z] =================================================================================================================== 00:25:13.256 [2024-11-19T16:20:19.971Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100970' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@973 -- # kill 100970 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@978 -- # wait 100970 00:25:13.256 16:20:19 keyring_file -- keyring/file.sh@21 -- # killprocess 100715 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 100715 ']' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@958 -- # kill -0 100715 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@959 -- # uname 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.256 16:20:19 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100715 00:25:13.515 killing process with pid 100715 00:25:13.515 16:20:19 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.515 16:20:19 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:13.515 16:20:19 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100715' 00:25:13.515 16:20:19 keyring_file -- common/autotest_common.sh@973 -- # kill 100715 00:25:13.515 16:20:19 keyring_file -- common/autotest_common.sh@978 -- # wait 100715 00:25:13.515 00:25:13.515 real 0m15.157s 00:25:13.515 user 0m39.527s 00:25:13.515 sys 0m2.774s 00:25:13.515 16:20:20 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
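For reference, the keyring_file flow exercised in the trace above can be driven with the same rpc.py commands that appear in the log. A minimal Python sketch follows; it assumes an SPDK application (here bdevperf) is already listening on /var/tmp/bperf.sock, and it reuses the rpc.py path and interchange-key string captured in this log rather than anything outside it.

#!/usr/bin/env python3
# Sketch only, not part of the captured log: replays the core keyring_file
# steps (register a file-backed key, attach an NVMe/TCP controller with it,
# inspect, detach, remove) via the same rpc.py subcommands seen in the trace.
import os
import subprocess
import tempfile

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as used in this log
SOCK = "/var/tmp/bperf.sock"                          # bdevperf RPC socket as above

def rpc(*args: str) -> str:
    return subprocess.check_output([RPC, "-s", SOCK, *args], text=True)

# 1. Write a PSK interchange key to a file readable only by its owner;
#    keyring_file_add_key rejects group/other-accessible files (see the
#    0100660 "Invalid permissions" error earlier in this log).
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:")
os.chmod(path, 0o600)

# 2. Register the key and attach a controller that references it as a PSK.
rpc("keyring_file_add_key", "key0", path)
rpc("bdev_nvme_attach_controller", "-b", "nvme0", "-t", "tcp",
    "-a", "127.0.0.1", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode0", "-q", "nqn.2016-06.io.spdk:host0",
    "--psk", "key0")

# 3. Inspect and clean up, mirroring the end of the test.
print(rpc("keyring_get_keys"))
rpc("bdev_nvme_detach_controller", "nvme0")
rpc("keyring_file_remove_key", "key0")
os.unlink(path)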
00:25:13.515 ************************************ 00:25:13.515 END TEST keyring_file 00:25:13.515 ************************************ 00:25:13.515 16:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:13.515 16:20:20 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:25:13.515 16:20:20 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:13.515 16:20:20 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:13.515 16:20:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:13.515 16:20:20 -- common/autotest_common.sh@10 -- # set +x 00:25:13.776 ************************************ 00:25:13.776 START TEST keyring_linux 00:25:13.776 ************************************ 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:13.776 Joined session keyring: 61554496 00:25:13.776 * Looking for test storage... 00:25:13.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.776 --rc genhtml_branch_coverage=1 00:25:13.776 --rc genhtml_function_coverage=1 00:25:13.776 --rc genhtml_legend=1 00:25:13.776 --rc geninfo_all_blocks=1 00:25:13.776 --rc geninfo_unexecuted_blocks=1 00:25:13.776 00:25:13.776 ' 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.776 --rc genhtml_branch_coverage=1 00:25:13.776 --rc genhtml_function_coverage=1 00:25:13.776 --rc genhtml_legend=1 00:25:13.776 --rc geninfo_all_blocks=1 00:25:13.776 --rc geninfo_unexecuted_blocks=1 00:25:13.776 00:25:13.776 ' 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.776 --rc genhtml_branch_coverage=1 00:25:13.776 --rc genhtml_function_coverage=1 00:25:13.776 --rc genhtml_legend=1 00:25:13.776 --rc geninfo_all_blocks=1 00:25:13.776 --rc geninfo_unexecuted_blocks=1 00:25:13.776 00:25:13.776 ' 00:25:13.776 16:20:20 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:13.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:13.776 --rc genhtml_branch_coverage=1 00:25:13.776 --rc genhtml_function_coverage=1 00:25:13.776 --rc genhtml_legend=1 00:25:13.776 --rc geninfo_all_blocks=1 00:25:13.776 --rc geninfo_unexecuted_blocks=1 00:25:13.776 00:25:13.776 ' 00:25:13.776 16:20:20 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:13.776 16:20:20 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:13.776 16:20:20 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=92a6f107-e459-4aaa-bfee-246c0e15cbd1 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:13.776 16:20:20 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:13.776 16:20:20 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:13.777 16:20:20 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:13.777 16:20:20 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.777 16:20:20 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.777 16:20:20 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.777 16:20:20 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:13.777 16:20:20 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:13.777 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:13.777 16:20:20 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:13.777 16:20:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:13.777 16:20:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:14.037 /tmp/:spdk-test:key0 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:14.037 16:20:20 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:25:14.037 16:20:20 keyring_linux -- nvmf/common.sh@733 -- # python - 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:14.037 /tmp/:spdk-test:key1 00:25:14.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.037 16:20:20 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:14.037 16:20:20 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=101096 00:25:14.037 16:20:20 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.037 16:20:20 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 101096 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 101096 ']' 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.037 16:20:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.037 [2024-11-19 16:20:20.656382] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
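For reference, the NVMeTLSkey-1 strings registered above (in the /tmp/:spdk-test:key* files and in the keyctl payloads that follow) are produced by the format_interchange_psk helper traced in this log. A minimal Python sketch of that formatting is shown below; the payload layout (configured key bytes followed by their CRC-32, little-endian, base64-encoded between a two-digit digest id and a trailing colon) is inferred from the strings visible in this trace, not taken from the SPDK source.

# Sketch only, not part of the captured log: reproduce the interchange-key
# string format used by the traced format_interchange_psk helper.
# Assumption: the key's CRC-32 is appended in little-endian byte order.
import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    raw = key.encode("ascii")
    payload = raw + zlib.crc32(raw).to_bytes(4, "little")
    return f"{prefix}:{digest:02d}:{base64.b64encode(payload).decode('ascii')}:"

# For key '00112233445566778899aabbccddeeff' and digest 0 this is expected to
# reproduce the 'NVMeTLSkey-1:00:MDAx...JEiQ:' string registered above.
print(format_interchange_psk("00112233445566778899aabbccddeeff", 0))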
00:25:14.037 [2024-11-19 16:20:20.656659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101096 ] 00:25:14.297 [2024-11-19 16:20:20.800299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.297 [2024-11-19 16:20:20.819541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.297 [2024-11-19 16:20:20.852501] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:14.297 16:20:20 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.297 16:20:20 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:14.297 16:20:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:14.297 16:20:20 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.297 16:20:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.297 [2024-11-19 16:20:20.972558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.297 null0 00:25:14.297 [2024-11-19 16:20:21.004558] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.297 [2024-11-19 16:20:21.004905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.556 16:20:21 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:14.556 222077968 00:25:14.556 16:20:21 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:14.556 285933027 00:25:14.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:14.556 16:20:21 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=101102 00:25:14.556 16:20:21 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:14.556 16:20:21 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 101102 /var/tmp/bperf.sock 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 101102 ']' 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.556 16:20:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:14.556 [2024-11-19 16:20:21.093957] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 23.11.0 initialization... 
00:25:14.556 [2024-11-19 16:20:21.094257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101102 ] 00:25:14.556 [2024-11-19 16:20:21.244070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.556 [2024-11-19 16:20:21.264123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.815 16:20:21 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.815 16:20:21 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:25:14.815 16:20:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:14.816 16:20:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:15.075 16:20:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:15.075 16:20:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:15.334 [2024-11-19 16:20:21.932071] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:15.334 16:20:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:15.334 16:20:21 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:15.594 [2024-11-19 16:20:22.197477] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:15.594 nvme0n1 00:25:15.594 16:20:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:15.594 16:20:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:15.594 16:20:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:15.594 16:20:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:15.594 16:20:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:15.594 16:20:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.853 16:20:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:15.853 16:20:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:15.853 16:20:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:15.853 16:20:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:15.853 16:20:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:15.853 16:20:22 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.853 16:20:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@25 -- # sn=222077968 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 222077968 == \2\2\2\0\7\7\9\6\8 ]] 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 222077968 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:16.420 16:20:22 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:16.420 Running I/O for 1 seconds... 00:25:17.401 12105.00 IOPS, 47.29 MiB/s 00:25:17.401 Latency(us) 00:25:17.401 [2024-11-19T16:20:24.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.401 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:17.401 nvme0n1 : 1.01 12113.28 47.32 0.00 0.00 10510.90 3842.79 12868.89 00:25:17.401 [2024-11-19T16:20:24.116Z] =================================================================================================================== 00:25:17.401 [2024-11-19T16:20:24.116Z] Total : 12113.28 47.32 0.00 0.00 10510.90 3842.79 12868.89 00:25:17.401 { 00:25:17.401 "results": [ 00:25:17.401 { 00:25:17.401 "job": "nvme0n1", 00:25:17.401 "core_mask": "0x2", 00:25:17.401 "workload": "randread", 00:25:17.401 "status": "finished", 00:25:17.401 "queue_depth": 128, 00:25:17.401 "io_size": 4096, 00:25:17.401 "runtime": 1.009883, 00:25:17.401 "iops": 12113.284410174248, 00:25:17.401 "mibps": 47.31751722724316, 00:25:17.401 "io_failed": 0, 00:25:17.401 "io_timeout": 0, 00:25:17.401 "avg_latency_us": 10510.901622585703, 00:25:17.401 "min_latency_us": 3842.7927272727275, 00:25:17.401 "max_latency_us": 12868.887272727272 00:25:17.401 } 00:25:17.401 ], 00:25:17.401 "core_count": 1 00:25:17.401 } 00:25:17.401 16:20:24 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:17.402 16:20:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:17.660 16:20:24 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:17.660 16:20:24 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:17.660 16:20:24 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:17.660 16:20:24 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:17.660 16:20:24 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:17.660 16:20:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:18.229 16:20:24 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:18.229 16:20:24 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:18.229 16:20:24 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:18.229 16:20:24 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:18.229 16:20:24 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.229 16:20:24 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:18.230 [2024-11-19 16:20:24.908392] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:18.230 [2024-11-19 16:20:24.908474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0c150 (107): Transport endpoint is not connected 00:25:18.230 [2024-11-19 16:20:24.909466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0c150 (9): Bad file descriptor 00:25:18.230 [2024-11-19 16:20:24.910463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:25:18.230 [2024-11-19 16:20:24.910518] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:18.230 [2024-11-19 16:20:24.910529] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:18.230 [2024-11-19 16:20:24.910539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:25:18.230 request: 00:25:18.230 { 00:25:18.230 "name": "nvme0", 00:25:18.230 "trtype": "tcp", 00:25:18.230 "traddr": "127.0.0.1", 00:25:18.230 "adrfam": "ipv4", 00:25:18.230 "trsvcid": "4420", 00:25:18.230 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.230 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:18.230 "prchk_reftag": false, 00:25:18.230 "prchk_guard": false, 00:25:18.230 "hdgst": false, 00:25:18.230 "ddgst": false, 00:25:18.230 "psk": ":spdk-test:key1", 00:25:18.230 "allow_unrecognized_csi": false, 00:25:18.230 "method": "bdev_nvme_attach_controller", 00:25:18.230 "req_id": 1 00:25:18.230 } 00:25:18.230 Got JSON-RPC error response 00:25:18.230 response: 00:25:18.230 { 00:25:18.230 "code": -5, 00:25:18.230 "message": "Input/output error" 00:25:18.230 } 00:25:18.230 16:20:24 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:25:18.230 16:20:24 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:18.230 16:20:24 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:18.230 16:20:24 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@33 -- # sn=222077968 00:25:18.230 16:20:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 222077968 00:25:18.230 1 links removed 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@33 -- # sn=285933027 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285933027 00:25:18.489 1 links removed 00:25:18.489 16:20:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 101102 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 101102 ']' 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 101102 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101102 00:25:18.489 killing process with pid 101102 00:25:18.489 Received shutdown signal, test time was about 1.000000 seconds 00:25:18.489 00:25:18.489 Latency(us) 00:25:18.489 [2024-11-19T16:20:25.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:18.489 [2024-11-19T16:20:25.204Z] =================================================================================================================== 00:25:18.489 [2024-11-19T16:20:25.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:18.489 16:20:24 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101102' 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@973 -- # kill 101102 00:25:18.489 16:20:24 keyring_linux -- common/autotest_common.sh@978 -- # wait 101102 00:25:18.489 16:20:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 101096 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 101096 ']' 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 101096 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101096 00:25:18.489 killing process with pid 101096 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101096' 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@973 -- # kill 101096 00:25:18.489 16:20:25 keyring_linux -- common/autotest_common.sh@978 -- # wait 101096 00:25:18.749 00:25:18.749 real 0m5.116s 00:25:18.749 user 0m10.584s 00:25:18.749 sys 0m1.357s 00:25:18.749 ************************************ 00:25:18.749 END TEST keyring_linux 00:25:18.749 ************************************ 00:25:18.749 16:20:25 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:18.749 16:20:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:18.749 16:20:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:25:18.749 16:20:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:18.749 16:20:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:18.749 16:20:25 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:25:18.749 16:20:25 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:25:18.749 16:20:25 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:25:18.749 16:20:25 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:25:18.749 16:20:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:18.749 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:18.749 16:20:25 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:25:18.749 16:20:25 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:25:18.749 16:20:25 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:25:18.749 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:25:20.655 INFO: APP EXITING 00:25:20.655 INFO: 
killing all VMs 00:25:20.655 INFO: killing vhost app 00:25:20.655 INFO: EXIT DONE 00:25:21.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:21.593 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:21.593 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:22.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:22.161 Cleaning 00:25:22.161 Removing: /var/run/dpdk/spdk0/config 00:25:22.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:22.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:22.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:22.161 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:22.161 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:22.161 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:22.161 Removing: /var/run/dpdk/spdk1/config 00:25:22.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:22.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:22.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:22.161 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:22.162 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:22.162 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:22.162 Removing: /var/run/dpdk/spdk2/config 00:25:22.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:22.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:22.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:22.162 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:22.162 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:22.162 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:22.162 Removing: /var/run/dpdk/spdk3/config 00:25:22.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:22.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:22.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:22.162 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:22.162 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:22.421 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:22.421 Removing: /var/run/dpdk/spdk4/config 00:25:22.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:22.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:22.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:22.421 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:22.421 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:22.421 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:22.421 Removing: /dev/shm/nvmf_trace.0 00:25:22.421 Removing: /dev/shm/spdk_tgt_trace.pid69985 00:25:22.421 Removing: /var/run/dpdk/spdk0 00:25:22.421 Removing: /var/run/dpdk/spdk1 00:25:22.421 Removing: /var/run/dpdk/spdk2 00:25:22.421 Removing: /var/run/dpdk/spdk3 00:25:22.421 Removing: /var/run/dpdk/spdk4 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100182 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100212 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100247 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100715 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100720 00:25:22.421 Removing: /var/run/dpdk/spdk_pid100970 00:25:22.421 Removing: /var/run/dpdk/spdk_pid101096 00:25:22.421 Removing: /var/run/dpdk/spdk_pid101102 00:25:22.421 Removing: /var/run/dpdk/spdk_pid69838 00:25:22.421 Removing: /var/run/dpdk/spdk_pid69985 00:25:22.421 Removing: 
/var/run/dpdk/spdk_pid70184 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70265 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70285 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70389 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70399 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70533 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70733 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70883 00:25:22.421 Removing: /var/run/dpdk/spdk_pid70961 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71039 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71131 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71203 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71236 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71266 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71335 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71437 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71875 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71922 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71965 00:25:22.421 Removing: /var/run/dpdk/spdk_pid71968 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72030 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72038 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72100 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72108 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72148 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72166 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72212 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72221 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72347 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72383 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72465 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72792 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72804 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72835 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72848 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72864 00:25:22.421 Removing: /var/run/dpdk/spdk_pid72883 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72896 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72912 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72931 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72939 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72960 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72979 00:25:22.422 Removing: /var/run/dpdk/spdk_pid72987 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73002 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73021 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73035 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73045 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73064 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73083 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73093 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73129 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73137 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73166 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73233 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73267 00:25:22.422 Removing: /var/run/dpdk/spdk_pid73271 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73298 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73309 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73311 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73359 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73367 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73390 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73405 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73409 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73414 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73428 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73432 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73447 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73451 
00:25:22.680 Removing: /var/run/dpdk/spdk_pid73474 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73506 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73510 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73545 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73549 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73551 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73597 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73603 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73630 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73637 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73639 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73652 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73654 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73656 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73669 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73671 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73753 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73790 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73897 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73930 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73975 00:25:22.680 Removing: /var/run/dpdk/spdk_pid73990 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74006 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74021 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74052 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74068 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74146 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74157 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74195 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74257 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74302 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74333 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74427 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74464 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74502 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74723 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74815 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74849 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74873 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74901 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74940 00:25:22.680 Removing: /var/run/dpdk/spdk_pid74968 00:25:22.680 Removing: /var/run/dpdk/spdk_pid75005 00:25:22.680 Removing: /var/run/dpdk/spdk_pid75392 00:25:22.680 Removing: /var/run/dpdk/spdk_pid75432 00:25:22.680 Removing: /var/run/dpdk/spdk_pid75775 00:25:22.680 Removing: /var/run/dpdk/spdk_pid76229 00:25:22.680 Removing: /var/run/dpdk/spdk_pid76491 00:25:22.680 Removing: /var/run/dpdk/spdk_pid77327 00:25:22.680 Removing: /var/run/dpdk/spdk_pid78241 00:25:22.680 Removing: /var/run/dpdk/spdk_pid78358 00:25:22.680 Removing: /var/run/dpdk/spdk_pid78420 00:25:22.680 Removing: /var/run/dpdk/spdk_pid79828 00:25:22.680 Removing: /var/run/dpdk/spdk_pid80140 00:25:22.680 Removing: /var/run/dpdk/spdk_pid83814 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84167 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84277 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84404 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84425 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84459 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84480 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84565 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84688 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84829 00:25:22.680 Removing: /var/run/dpdk/spdk_pid84903 00:25:22.680 Removing: /var/run/dpdk/spdk_pid85090 00:25:22.939 Removing: /var/run/dpdk/spdk_pid85158 00:25:22.939 Removing: /var/run/dpdk/spdk_pid85242 00:25:22.939 Removing: /var/run/dpdk/spdk_pid85590 00:25:22.939 Removing: 
/var/run/dpdk/spdk_pid85987 00:25:22.939 Removing: /var/run/dpdk/spdk_pid85988 00:25:22.939 Removing: /var/run/dpdk/spdk_pid85989 00:25:22.939 Removing: /var/run/dpdk/spdk_pid86248 00:25:22.939 Removing: /var/run/dpdk/spdk_pid86484 00:25:22.939 Removing: /var/run/dpdk/spdk_pid86491 00:25:22.939 Removing: /var/run/dpdk/spdk_pid88805 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89180 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89183 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89505 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89520 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89540 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89565 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89576 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89664 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89666 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89774 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89782 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89890 00:25:22.939 Removing: /var/run/dpdk/spdk_pid89892 00:25:22.939 Removing: /var/run/dpdk/spdk_pid90332 00:25:22.939 Removing: /var/run/dpdk/spdk_pid90381 00:25:22.939 Removing: /var/run/dpdk/spdk_pid90484 00:25:22.939 Removing: /var/run/dpdk/spdk_pid90563 00:25:22.939 Removing: /var/run/dpdk/spdk_pid90914 00:25:22.939 Removing: /var/run/dpdk/spdk_pid91104 00:25:22.939 Removing: /var/run/dpdk/spdk_pid91521 00:25:22.939 Removing: /var/run/dpdk/spdk_pid92077 00:25:22.939 Removing: /var/run/dpdk/spdk_pid92926 00:25:22.939 Removing: /var/run/dpdk/spdk_pid93560 00:25:22.939 Removing: /var/run/dpdk/spdk_pid93562 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95558 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95608 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95661 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95709 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95817 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95865 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95912 00:25:22.939 Removing: /var/run/dpdk/spdk_pid95961 00:25:22.939 Removing: /var/run/dpdk/spdk_pid96312 00:25:22.939 Removing: /var/run/dpdk/spdk_pid97516 00:25:22.939 Removing: /var/run/dpdk/spdk_pid97654 00:25:22.939 Removing: /var/run/dpdk/spdk_pid97888 00:25:22.939 Removing: /var/run/dpdk/spdk_pid98469 00:25:22.939 Removing: /var/run/dpdk/spdk_pid98629 00:25:22.939 Removing: /var/run/dpdk/spdk_pid98786 00:25:22.939 Removing: /var/run/dpdk/spdk_pid98877 00:25:22.939 Removing: /var/run/dpdk/spdk_pid99046 00:25:22.939 Removing: /var/run/dpdk/spdk_pid99155 00:25:22.939 Removing: /var/run/dpdk/spdk_pid99862 00:25:22.939 Removing: /var/run/dpdk/spdk_pid99896 00:25:22.939 Removing: /var/run/dpdk/spdk_pid99927 00:25:22.939 Clean 00:25:22.939 16:20:29 -- common/autotest_common.sh@1453 -- # return 0 00:25:22.939 16:20:29 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:25:22.939 16:20:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:22.939 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:25:23.198 16:20:29 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:25:23.198 16:20:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:23.198 16:20:29 -- common/autotest_common.sh@10 -- # set +x 00:25:23.198 16:20:29 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:23.198 16:20:29 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:23.198 16:20:29 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:23.198 16:20:29 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:25:23.198 
16:20:29 -- spdk/autotest.sh@398 -- # hostname 00:25:23.198 16:20:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:23.456 geninfo: WARNING: invalid characters removed from testname! 00:25:55.563 16:20:58 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:55.823 16:21:02 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.114 16:21:05 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:01.650 16:21:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:04.197 16:21:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:07.490 16:21:13 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:10.024 16:21:16 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:10.024 16:21:16 -- spdk/autorun.sh@1 -- $ timing_finish 00:26:10.024 16:21:16 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:26:10.024 16:21:16 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:10.024 16:21:16 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:10.024 16:21:16 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build 
Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:10.024 + [[ -n 5995 ]] 00:26:10.024 + sudo kill 5995 00:26:10.034 [Pipeline] } 00:26:10.049 [Pipeline] // timeout 00:26:10.054 [Pipeline] } 00:26:10.067 [Pipeline] // stage 00:26:10.072 [Pipeline] } 00:26:10.088 [Pipeline] // catchError 00:26:10.098 [Pipeline] stage 00:26:10.101 [Pipeline] { (Stop VM) 00:26:10.115 [Pipeline] sh 00:26:10.395 + vagrant halt 00:26:13.755 ==> default: Halting domain... 00:26:19.038 [Pipeline] sh 00:26:19.318 + vagrant destroy -f 00:26:22.606 ==> default: Removing domain... 00:26:22.619 [Pipeline] sh 00:26:22.899 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:22.908 [Pipeline] } 00:26:22.924 [Pipeline] // stage 00:26:22.929 [Pipeline] } 00:26:22.944 [Pipeline] // dir 00:26:22.951 [Pipeline] } 00:26:22.966 [Pipeline] // wrap 00:26:22.972 [Pipeline] } 00:26:22.985 [Pipeline] // catchError 00:26:22.996 [Pipeline] stage 00:26:22.998 [Pipeline] { (Epilogue) 00:26:23.012 [Pipeline] sh 00:26:23.291 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:29.867 [Pipeline] catchError 00:26:29.869 [Pipeline] { 00:26:29.882 [Pipeline] sh 00:26:30.221 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:30.480 Artifacts sizes are good 00:26:30.489 [Pipeline] } 00:26:30.504 [Pipeline] // catchError 00:26:30.516 [Pipeline] archiveArtifacts 00:26:30.524 Archiving artifacts 00:26:30.663 [Pipeline] cleanWs 00:26:30.675 [WS-CLEANUP] Deleting project workspace... 00:26:30.675 [WS-CLEANUP] Deferred wipeout is used... 00:26:30.681 [WS-CLEANUP] done 00:26:30.683 [Pipeline] } 00:26:30.700 [Pipeline] // stage 00:26:30.705 [Pipeline] } 00:26:30.720 [Pipeline] // node 00:26:30.726 [Pipeline] End of Pipeline 00:26:30.769 Finished: SUCCESS